Jan 20 01:38:22.601936 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:31:13 -00 2026 Jan 20 01:38:22.618292 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8 Jan 20 01:38:22.618335 kernel: BIOS-provided physical RAM map: Jan 20 01:38:22.618347 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 01:38:22.618359 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 01:38:22.618367 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 01:38:22.618380 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 01:38:22.618392 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 01:38:22.618442 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 01:38:22.618456 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 01:38:22.618476 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 01:38:22.618486 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 01:38:22.618498 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 01:38:22.618509 kernel: NX (Execute Disable) protection: active Jan 20 01:38:22.618522 kernel: APIC: Static calls initialized Jan 20 01:38:22.618536 kernel: SMBIOS 2.8 present. 
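The e820 map above is the firmware's account of guest RAM: only the two ranges marked "usable" become allocatable memory, everything else stays reserved. A minimal sketch that totals them (range bounds copied from the map above; end addresses are inclusive):

    # Sum the BIOS-e820 ranges marked "usable" to cross-check the kernel's
    # later "Memory: 2447340K/2571752K available" line.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x000000009cfdbfff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**20:.1f} MiB")
    # -> 2633874432 bytes = 2511.9 MiB; the managed total reported later
    # (2571752K = 2633474048 bytes) is slightly smaller, the difference
    # being pages reserved during early boot.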
Jan 20 01:38:22.618581 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 01:38:22.618592 kernel: DMI: Memory slots populated: 1/1 Jan 20 01:38:22.618601 kernel: Hypervisor detected: KVM Jan 20 01:38:22.618612 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 01:38:22.618622 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 01:38:22.618632 kernel: kvm-clock: using sched offset of 82818704970 cycles Jan 20 01:38:22.618645 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 01:38:22.618656 kernel: tsc: Detected 2445.426 MHz processor Jan 20 01:38:22.618672 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 01:38:22.618683 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 01:38:22.618694 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 01:38:22.618704 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 01:38:22.618715 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 01:38:22.618725 kernel: Using GB pages for direct mapping Jan 20 01:38:22.618736 kernel: ACPI: Early table checksum verification disabled Jan 20 01:38:22.618750 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 01:38:22.618760 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618770 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618781 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618791 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 01:38:22.618802 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618813 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618827 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618837 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:38:22.618853 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 01:38:22.618864 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 01:38:22.618875 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 01:38:22.618889 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 01:38:22.618900 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 01:38:22.618911 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 01:38:22.618922 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 01:38:22.618933 kernel: No NUMA configuration found Jan 20 01:38:22.618944 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 01:38:22.618958 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 20 01:38:22.618970 kernel: Zone ranges: Jan 20 01:38:22.618981 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 01:38:22.618992 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 01:38:22.619003 kernel: Normal empty Jan 20 01:38:22.619015 kernel: Device empty Jan 20 01:38:22.619026 kernel: Movable zone start for each node Jan 20 01:38:22.619038 kernel: Early memory node ranges Jan 20 01:38:22.619053 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 20 01:38:22.619065 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 01:38:22.619076 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 20 01:38:22.626212 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 01:38:22.626244 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 01:38:22.626288 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 01:38:22.626302 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 01:38:22.626323 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 01:38:22.626334 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 01:38:22.626345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 01:38:22.626390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 01:38:22.626403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 01:38:22.626414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 01:38:22.626426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 01:38:22.626441 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 01:38:22.626452 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 01:38:22.626463 kernel: TSC deadline timer available Jan 20 01:38:22.626474 kernel: CPU topo: Max. logical packages: 1 Jan 20 01:38:22.626485 kernel: CPU topo: Max. logical dies: 1 Jan 20 01:38:22.626495 kernel: CPU topo: Max. dies per package: 1 Jan 20 01:38:22.626506 kernel: CPU topo: Max. threads per core: 1 Jan 20 01:38:22.626520 kernel: CPU topo: Num. cores per package: 4 Jan 20 01:38:22.626530 kernel: CPU topo: Num. threads per package: 4 Jan 20 01:38:22.626541 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 20 01:38:22.626551 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 01:38:22.626562 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 01:38:22.626573 kernel: kvm-guest: setup PV sched yield Jan 20 01:38:22.626583 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 01:38:22.626594 kernel: Booting paravirtualized kernel on KVM Jan 20 01:38:22.626608 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 01:38:22.626619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 01:38:22.626630 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 20 01:38:22.626641 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 20 01:38:22.626651 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 01:38:22.626662 kernel: kvm-guest: PV spinlocks enabled Jan 20 01:38:22.626672 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 01:38:22.626688 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8 Jan 20 01:38:22.626699 kernel: random: crng init done Jan 20 01:38:22.626712 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 01:38:22.626725 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 
01:38:22.626740 kernel: Fallback order for Node 0: 0 Jan 20 01:38:22.626754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 20 01:38:22.626770 kernel: Policy zone: DMA32 Jan 20 01:38:22.626781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 01:38:22.626792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 01:38:22.626802 kernel: ftrace: allocating 40097 entries in 157 pages Jan 20 01:38:22.626812 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 01:38:22.626823 kernel: Dynamic Preempt: voluntary Jan 20 01:38:22.626835 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 01:38:22.626851 kernel: rcu: RCU event tracing is enabled. Jan 20 01:38:22.626862 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 01:38:22.626873 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 01:38:22.626920 kernel: Rude variant of Tasks RCU enabled. Jan 20 01:38:22.626933 kernel: Tracing variant of Tasks RCU enabled. Jan 20 01:38:22.626944 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 01:38:22.626954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 01:38:22.626966 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:38:22.626981 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:38:22.626992 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:38:22.627002 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 01:38:22.627013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 01:38:22.627035 kernel: Console: colour VGA+ 80x25 Jan 20 01:38:22.627049 kernel: printk: legacy console [ttyS0] enabled Jan 20 01:38:22.627060 kernel: ACPI: Core revision 20240827 Jan 20 01:38:22.627071 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 01:38:22.627083 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 01:38:22.715662 kernel: x2apic enabled Jan 20 01:38:22.715697 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 01:38:22.715749 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 01:38:22.715763 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 01:38:22.715820 kernel: kvm-guest: setup PV IPIs Jan 20 01:38:22.715832 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 01:38:22.715844 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 01:38:22.715856 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 01:38:22.715868 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 01:38:22.715880 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 01:38:22.715891 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 01:38:22.715908 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 01:38:22.715922 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 01:38:22.715934 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 01:38:22.715945 kernel: Speculative Store Bypass: Vulnerable Jan 20 01:38:22.715958 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 01:38:22.715975 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 01:38:22.715993 kernel: active return thunk: srso_alias_return_thunk Jan 20 01:38:22.716004 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 01:38:22.716016 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 01:38:22.716027 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 01:38:22.716038 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 01:38:22.716050 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 01:38:22.716061 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 01:38:22.716076 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 01:38:22.716087 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 01:38:22.736228 kernel: Freeing SMP alternatives memory: 32K Jan 20 01:38:22.736253 kernel: pid_max: default: 32768 minimum: 301 Jan 20 01:38:22.736266 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 01:38:22.736279 kernel: landlock: Up and running. Jan 20 01:38:22.736292 kernel: SELinux: Initializing. Jan 20 01:38:22.736305 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:38:22.736369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:38:22.736420 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 01:38:22.736434 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 01:38:22.736447 kernel: signal: max sigframe size: 1776 Jan 20 01:38:22.736460 kernel: rcu: Hierarchical SRCU implementation. Jan 20 01:38:22.736474 kernel: rcu: Max phase no-delay instances is 400. Jan 20 01:38:22.736487 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 01:38:22.736506 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 01:38:22.736520 kernel: smp: Bringing up secondary CPUs ... Jan 20 01:38:22.736532 kernel: smpboot: x86: Booting SMP configuration: Jan 20 01:38:22.736545 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 01:38:22.736558 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 01:38:22.736571 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 01:38:22.736585 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 118472K reserved, 0K cma-reserved) Jan 20 01:38:22.736601 kernel: devtmpfs: initialized Jan 20 01:38:22.736614 kernel: x86/mm: Memory block size: 128MB Jan 20 01:38:22.736626 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 01:38:22.736638 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 01:38:22.736649 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 01:38:22.736661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 01:38:22.736672 kernel: audit: initializing netlink subsys (disabled) Jan 20 01:38:22.736688 kernel: audit: type=2000 audit(1768873037.658:1): state=initialized audit_enabled=0 res=1 Jan 20 01:38:22.736700 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 01:38:22.736711 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 01:38:22.736723 kernel: cpuidle: using governor menu Jan 20 01:38:22.736775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 01:38:22.736787 kernel: dca service started, version 1.12.1 Jan 20 01:38:22.736799 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 20 01:38:22.736816 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 01:38:22.736828 kernel: PCI: Using configuration type 1 for base access Jan 20 01:38:22.736841 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
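The audit records interleaved through this log (e.g. "audit(1768873037.658:1)" above) carry wall-clock timestamps as seconds.milliseconds since the Unix epoch, followed by a per-boot serial number after the colon. Decoding the first record's epoch value with only the standard library shows where it falls relative to boot:

    from datetime import datetime, timezone

    ts = 1768873037.658  # from "audit: type=2000 audit(1768873037.658:1)"
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2026-01-20T01:37:17.658000+00:00, about 33 s before the rtc_cmos
    # line further down sets the system clock to 2026-01-20T01:37:50 UTC.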
Jan 20 01:38:22.736852 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 01:38:22.736864 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 01:38:22.736876 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 01:38:22.736888 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 01:38:22.736904 kernel: ACPI: Added _OSI(Module Device) Jan 20 01:38:22.736915 kernel: ACPI: Added _OSI(Processor Device) Jan 20 01:38:22.736927 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 01:38:22.736939 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 01:38:22.736950 kernel: ACPI: Interpreter enabled Jan 20 01:38:22.736962 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 01:38:22.736976 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 01:38:22.736990 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 01:38:22.737003 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 01:38:22.737015 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 01:38:22.737027 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 01:38:22.789330 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 01:38:22.789733 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 01:38:22.790067 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 01:38:22.790220 kernel: PCI host bridge to bus 0000:00 Jan 20 01:38:22.790712 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 01:38:22.790979 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 01:38:22.791354 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 01:38:22.791596 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 01:38:22.791845 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 01:38:22.792086 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 01:38:22.792465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 01:38:22.792892 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 20 01:38:22.793431 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 01:38:22.793726 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 20 01:38:22.794011 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 20 01:38:22.794429 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 20 01:38:22.794697 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 01:38:22.794961 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 17578 usecs Jan 20 01:38:22.795423 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 20 01:38:22.795695 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 20 01:38:22.795949 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 20 01:38:22.796366 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 01:38:22.796746 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 01:38:22.797040 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 20 01:38:22.797468 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff] Jan 20 01:38:22.797729 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 01:38:22.798010 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 01:38:22.798443 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 20 01:38:22.798710 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 20 01:38:22.798970 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 01:38:22.799376 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 20 01:38:22.799717 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 01:38:22.800032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 01:38:22.800471 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 01:38:22.800734 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 20 01:38:22.800992 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 20 01:38:22.801480 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 01:38:22.801741 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 20 01:38:22.801759 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 01:38:22.801771 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 01:38:22.801783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 01:38:22.801794 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 01:38:22.801812 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 01:38:22.801823 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 01:38:22.801834 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 01:38:22.801846 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 01:38:22.801857 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 01:38:22.801868 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 01:38:22.801914 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 01:38:22.801933 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 01:38:22.801944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 01:38:22.801955 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 01:38:22.801966 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 01:38:22.801977 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 01:38:22.801988 kernel: iommu: Default domain type: Translated Jan 20 01:38:22.801999 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 01:38:22.802014 kernel: PCI: Using ACPI for IRQ routing Jan 20 01:38:22.802026 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 01:38:22.802039 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 01:38:22.802050 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 01:38:22.802482 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 01:38:22.802745 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 01:38:22.803001 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 01:38:22.803021 kernel: vgaarb: loaded Jan 20 01:38:22.803033 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 01:38:22.803047 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 01:38:22.803059 kernel: clocksource: 
Switched to clocksource kvm-clock Jan 20 01:38:22.803070 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 01:38:22.803082 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 01:38:22.803225 kernel: pnp: PnP ACPI init Jan 20 01:38:22.806370 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 01:38:22.806413 kernel: pnp: PnP ACPI: found 6 devices Jan 20 01:38:22.806426 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 01:38:22.806437 kernel: NET: Registered PF_INET protocol family Jan 20 01:38:22.806448 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 01:38:22.806459 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 01:38:22.806470 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 01:38:22.806487 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 01:38:22.806499 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 01:38:22.806510 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 01:38:22.806521 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:38:22.806532 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:38:22.806543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 01:38:22.806555 kernel: NET: Registered PF_XDP protocol family Jan 20 01:38:22.806867 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 01:38:22.807268 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 01:38:22.807521 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 01:38:22.807766 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 01:38:22.810318 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 01:38:22.810651 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 01:38:22.810671 kernel: PCI: CLS 0 bytes, default 64 Jan 20 01:38:22.810693 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 01:38:22.810706 kernel: Initialise system trusted keyrings Jan 20 01:38:22.810718 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 01:38:22.810729 kernel: Key type asymmetric registered Jan 20 01:38:22.810740 kernel: Asymmetric key parser 'x509' registered Jan 20 01:38:22.810751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 01:38:22.810762 kernel: io scheduler mq-deadline registered Jan 20 01:38:22.810776 kernel: io scheduler kyber registered Jan 20 01:38:22.810788 kernel: io scheduler bfq registered Jan 20 01:38:22.810799 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 01:38:22.810812 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 01:38:22.810824 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 01:38:22.810835 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 01:38:22.810846 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 01:38:22.810861 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 01:38:22.810873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 01:38:22.810885 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 01:38:22.810896 kernel: serio: i8042 AUX port at 
0x60,0x64 irq 12 Jan 20 01:38:22.811453 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 01:38:22.811473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 01:38:22.811726 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 01:38:22.811983 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:37:50 UTC (1768873070) Jan 20 01:38:22.812377 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 01:38:22.812397 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 01:38:22.812409 kernel: NET: Registered PF_INET6 protocol family Jan 20 01:38:22.812420 kernel: Segment Routing with IPv6 Jan 20 01:38:22.812432 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 01:38:22.812450 kernel: NET: Registered PF_PACKET protocol family Jan 20 01:38:22.812463 kernel: Key type dns_resolver registered Jan 20 01:38:22.812475 kernel: IPI shorthand broadcast: enabled Jan 20 01:38:22.812488 kernel: sched_clock: Marking stable (19883048809, 12002322376)->(36885140794, -4999769609) Jan 20 01:38:22.812500 kernel: registered taskstats version 1 Jan 20 01:38:22.812512 kernel: Loading compiled-in X.509 certificates Jan 20 01:38:22.812523 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: afdfbfc7519ef3fa38aa4389b822f24e81c62f9e' Jan 20 01:38:22.812538 kernel: Demotion targets for Node 0: null Jan 20 01:38:22.812549 kernel: Key type .fscrypt registered Jan 20 01:38:22.812560 kernel: Key type fscrypt-provisioning registered Jan 20 01:38:22.812571 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 01:38:22.812583 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:38:22.812595 kernel: ima: No architecture policies found Jan 20 01:38:22.812608 kernel: clk: Disabling unused clocks Jan 20 01:38:22.812623 kernel: Freeing unused kernel image (initmem) memory: 15436K Jan 20 01:38:22.812634 kernel: Write protecting the kernel read-only data: 45056k Jan 20 01:38:22.812645 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 20 01:38:22.812656 kernel: Run /init as init process Jan 20 01:38:22.812667 kernel: with arguments: Jan 20 01:38:22.812679 kernel: /init Jan 20 01:38:22.812691 kernel: with environment: Jan 20 01:38:22.812703 kernel: HOME=/ Jan 20 01:38:22.812719 kernel: TERM=linux Jan 20 01:38:22.812731 kernel: SCSI subsystem initialized Jan 20 01:38:22.812742 kernel: libata version 3.00 loaded. 
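The "Run /init as init process" lines above mark the kernel handing PID 1 to the initramfs, with the logged argv and environment. As a semantic sketch only (the kernel performs this exec internally; running this yourself is only meaningful as PID 1 inside an initramfs, where /init exists):

    import os

    # Replace the current process with /init, mirroring the argv and
    # environment printed in the log ("with arguments: /init",
    # "HOME=/", "TERM=linux").
    os.execve("/init", ["/init"], {"HOME": "/", "TERM": "linux"})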
Jan 20 01:38:22.813010 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:38:22.813029 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:38:22.813424 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 01:38:22.813696 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 01:38:22.813962 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:38:22.814502 kernel: scsi host0: ahci Jan 20 01:38:22.814849 kernel: scsi host1: ahci Jan 20 01:38:22.815325 kernel: scsi host2: ahci Jan 20 01:38:22.815692 kernel: scsi host3: ahci Jan 20 01:38:22.816030 kernel: scsi host4: ahci Jan 20 01:38:22.816527 kernel: scsi host5: ahci Jan 20 01:38:22.816546 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 20 01:38:22.816559 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 20 01:38:22.816570 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 20 01:38:22.816589 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 20 01:38:22.816601 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 20 01:38:22.816613 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 20 01:38:22.816625 kernel: hrtimer: interrupt took 31784266 ns Jan 20 01:38:22.816637 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:38:22.816648 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 01:38:22.816660 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:38:22.816671 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 01:38:22.816687 kernel: ata3.00: applying bridge limits Jan 20 01:38:22.816699 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:38:22.816710 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:38:22.816721 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:38:22.816733 kernel: ata3.00: configured for UDMA/100 Jan 20 01:38:22.816744 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:38:22.817220 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 01:38:22.817249 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:38:22.817602 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 01:38:22.817988 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 01:38:22.820778 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 01:38:22.820799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:38:22.825830 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 01:38:22.825868 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:38:22.825881 kernel: GPT:16515071 != 27000831 Jan 20 01:38:22.825892 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:38:22.825904 kernel: GPT:16515071 != 27000831 Jan 20 01:38:22.825916 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:38:22.825928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:38:22.825949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
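The GPT warnings above compare the last LBA recorded in the primary header (16515071) against the device's actual last LBA (27000831): the backing image was enlarged after it was written, so the backup GPT header is no longer at the end of the disk, which is why disk-uuid rewrites the table later in this log. Converting both LBAs to sizes shows the before/after capacity (512-byte sectors, per the vda line):

    SECTOR = 512
    for name, last_lba in (("image", 16515071), ("disk", 27000831)):
        size = (last_lba + 1) * SECTOR
        print(f"{name}: {size} bytes = {size / 2**30:.3f} GiB")
    # image: 8455716864 bytes = 7.875 GiB
    # disk:  13824425984 bytes = 12.875 GiB (the "12.9 GiB" vda line)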
Jan 20 01:38:22.825962 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:38:22.825975 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:38:22.825987 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 01:38:22.826001 kernel: raid6: avx2x4 gen() 3801 MB/s Jan 20 01:38:22.826013 kernel: raid6: avx2x2 gen() 6839 MB/s Jan 20 01:38:22.826025 kernel: raid6: avx2x1 gen() 4152 MB/s Jan 20 01:38:22.826040 kernel: raid6: using algorithm avx2x2 gen() 6839 MB/s Jan 20 01:38:22.826052 kernel: raid6: .... xor() 8214 MB/s, rmw enabled Jan 20 01:38:22.826063 kernel: raid6: using avx2x2 recovery algorithm Jan 20 01:38:22.826075 kernel: xor: automatically using best checksumming function avx Jan 20 01:38:22.826087 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:38:22.826235 kernel: BTRFS: device fsid ca982954-e818-4158-83b7-102f75baa62c devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Jan 20 01:38:22.826248 kernel: BTRFS info (device dm-0): first mount of filesystem ca982954-e818-4158-83b7-102f75baa62c Jan 20 01:38:22.826260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:38:22.826271 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:38:22.826283 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:38:22.826295 kernel: loop: module loaded Jan 20 01:38:22.826307 kernel: loop0: detected capacity change from 0 to 100160 Jan 20 01:38:22.826324 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:38:22.826337 systemd[1]: Successfully made /usr/ read-only. Jan 20 01:38:22.826352 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:38:22.826365 systemd[1]: Detected virtualization kvm. Jan 20 01:38:22.826378 systemd[1]: Detected architecture x86-64. Jan 20 01:38:22.826391 systemd[1]: Running in initrd. Jan 20 01:38:22.826406 systemd[1]: No hostname configured, using default hostname. Jan 20 01:38:22.826419 systemd[1]: Hostname set to . Jan 20 01:38:22.826431 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 01:38:22.826444 systemd[1]: Queued start job for default target initrd.target. Jan 20 01:38:22.826456 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:38:22.826469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:38:22.826486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:38:22.826500 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:38:22.826512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:38:22.826525 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:38:22.826538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:38:22.826550 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
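The systemd banner above encodes compile-time options as +FEATURE/-FEATURE tokens. A small parse sketch, using a truncated copy of that banner:

    # Split systemd's feature string into enabled/disabled sets
    # (shortened here; the full string appears in the log above).
    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"
    tokens = banner.split()
    enabled = [t[1:] for t in tokens if t[0] == "+"]
    disabled = [t[1:] for t in tokens if t[0] == "-"]
    print("enabled:", enabled)
    print("disabled:", disabled)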
Jan 20 01:38:22.826566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:38:22.826580 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:38:22.826592 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:38:22.826607 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:38:22.826623 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:38:22.826636 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:38:22.826648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:38:22.826663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:38:22.826676 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:38:22.826689 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:38:22.826702 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 01:38:22.826714 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:38:22.826726 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:38:22.826741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:38:22.826753 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:38:22.826767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:38:22.826780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:38:22.826792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:38:22.826804 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:38:22.826817 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 01:38:22.826833 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:38:22.826845 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:38:22.826858 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:38:22.826871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:38:22.826887 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:38:22.826900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:38:22.826912 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:38:22.826975 systemd-journald[320]: Collecting audit messages is enabled. Jan 20 01:38:22.827011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:38:22.827026 systemd-journald[320]: Journal started Jan 20 01:38:22.827050 systemd-journald[320]: Runtime Journal (/run/log/journal/f6309e0c6bf5444b960ee69bfabc74cd) is 6M, max 48.2M, 42.2M free. Jan 20 01:38:22.893597 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:38:22.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:38:23.042441 kernel: audit: type=1130 audit(1768873102.952:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:23.496017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:38:24.528366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:38:24.625645 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 01:38:24.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:24.699830 kernel: audit: type=1130 audit(1768873104.599:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:24.709764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:38:25.696693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:38:25.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:25.715472 kernel: audit: type=1130 audit(1768873105.697:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:25.840261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:38:25.927307 kernel: Bridge firewalling registered Jan 20 01:38:25.931333 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 01:38:25.939353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:38:26.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.569472 kernel: audit: type=1130 audit(1768873106.525:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.584661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:38:26.727410 kernel: audit: type=1130 audit(1768873106.608:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.644678 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:38:26.707464 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 01:38:26.832059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:38:26.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.924285 kernel: audit: type=1130 audit(1768873106.871:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.951536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:38:27.068433 kernel: audit: type=1130 audit(1768873106.976:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:26.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:27.071742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:38:27.119503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:38:27.211736 kernel: audit: type=1130 audit(1768873107.146:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:27.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:27.239000 audit: BPF prog-id=6 op=LOAD Jan 20 01:38:27.254360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:38:27.276522 kernel: audit: type=1334 audit(1768873107.239:10): prog-id=6 op=LOAD Jan 20 01:38:27.373836 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8 Jan 20 01:38:27.592893 systemd-resolved[359]: Positive Trust Anchors: Jan 20 01:38:27.595301 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:38:27.601345 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 01:38:27.601390 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:38:27.866736 systemd-resolved[359]: Defaulting to hostname 'linux'. 
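The positive trust anchors logged by systemd-resolved above are root-zone DS records in standard presentation format: owner, class, type, key tag, algorithm, digest type, digest. Splitting the first one (key tag 20326 is the 2017 root KSK; algorithm 8 is RSA/SHA-256; digest type 2 is SHA-256):

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
    # digest type 2 means a SHA-256 digest of the root DNSKEY: 32 bytes.
    print(key_tag, algorithm, digest_type, len(digest) // 2, "byte digest")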
Jan 20 01:38:27.883434 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:38:27.985682 kernel: audit: type=1130 audit(1768873107.910:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:27.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:27.911672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:38:28.526732 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:38:28.622553 kernel: iscsi: registered transport (tcp) Jan 20 01:38:28.738369 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:38:28.738805 kernel: QLogic iSCSI HBA Driver Jan 20 01:38:29.110930 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:38:30.186862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:38:30.337665 kernel: audit: type=1130 audit(1768873110.211:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:30.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:30.212541 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:38:30.925347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:38:31.029465 kernel: audit: type=1130 audit(1768873110.940:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:30.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:30.989680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:38:31.078681 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:38:31.437389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:38:31.583611 kernel: audit: type=1130 audit(1768873111.467:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:31.583653 kernel: audit: type=1334 audit(1768873111.493:15): prog-id=7 op=LOAD Jan 20 01:38:31.583672 kernel: audit: type=1334 audit(1768873111.500:16): prog-id=8 op=LOAD Jan 20 01:38:31.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:31.493000 audit: BPF prog-id=7 op=LOAD Jan 20 01:38:31.500000 audit: BPF prog-id=8 op=LOAD Jan 20 01:38:31.510701 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 01:38:31.987952 systemd-udevd[584]: Using default interface naming scheme 'v257'. Jan 20 01:38:32.279959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:38:32.516686 kernel: audit: type=1130 audit(1768873112.315:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:32.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:32.399621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:38:32.772446 dracut-pre-trigger[645]: rd.md=0: removing MD RAID activation Jan 20 01:38:33.133433 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:38:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:33.170074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:38:33.218036 kernel: audit: type=1130 audit(1768873113.169:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:33.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:33.300885 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:38:33.337674 kernel: audit: type=1130 audit(1768873113.239:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:33.337714 kernel: audit: type=1334 audit(1768873113.279:20): prog-id=9 op=LOAD Jan 20 01:38:33.279000 audit: BPF prog-id=9 op=LOAD Jan 20 01:38:33.416854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:38:34.015667 systemd-networkd[731]: lo: Link UP Jan 20 01:38:34.015710 systemd-networkd[731]: lo: Gained carrier Jan 20 01:38:34.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:34.023571 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:38:34.173743 kernel: audit: type=1130 audit(1768873114.080:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:34.081491 systemd[1]: Reached target network.target - Network. Jan 20 01:38:34.400029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:38:34.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:34.440491 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 20 01:38:35.583852 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 01:38:36.000065 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:38:36.246613 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:38:36.579473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:38:36.935024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:38:36.992807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:38:37.127951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:38:37.380284 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:38:37.408683 kernel: audit: type=1131 audit(1768873117.192:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:37.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:37.128197 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:38:37.196199 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:38:37.685587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:38:37.830495 disk-uuid[774]: Primary Header is updated. Jan 20 01:38:37.830495 disk-uuid[774]: Secondary Entries is updated. Jan 20 01:38:37.830495 disk-uuid[774]: Secondary Header is updated. Jan 20 01:38:38.005299 kernel: AES CTR mode by8 optimization enabled Jan 20 01:38:38.716473 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 01:38:39.194707 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:38:39.277575 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:38:39.335059 systemd-networkd[731]: eth0: Link UP Jan 20 01:38:39.354701 systemd-networkd[731]: eth0: Gained carrier Jan 20 01:38:39.354733 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:38:39.588313 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:38:40.050646 disk-uuid[775]: Warning: The kernel is still using the old partition table. Jan 20 01:38:40.050646 disk-uuid[775]: The new table will be used at the next reboot or after you Jan 20 01:38:40.050646 disk-uuid[775]: run partprobe(8) or kpartx(8) Jan 20 01:38:40.050646 disk-uuid[775]: The operation has completed successfully. Jan 20 01:38:40.389387 systemd-networkd[731]: eth0: Gained IPv6LL Jan 20 01:38:40.839893 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:38:40.840233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:38:41.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:38:41.701815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:38:41.791217 kernel: audit: type=1130 audit(1768873121.690:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:41.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:41.849552 kernel: audit: type=1131 audit(1768873121.690:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:41.849648 kernel: audit: type=1130 audit(1768873121.752:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:41.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:41.912989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:38:41.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:42.006634 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:38:42.172752 kernel: audit: type=1130 audit(1768873121.981:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:42.038493 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:38:42.064660 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:38:42.106556 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:38:42.119760 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:38:42.458851 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:38:42.569709 kernel: audit: type=1130 audit(1768873122.494:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:42.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:38:42.632849 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Jan 20 01:38:42.684939 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:38:42.685010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:38:42.791733 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:38:42.791825 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:38:42.908781 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:38:42.969856 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:38:43.104562 kernel: audit: type=1130 audit(1768873122.997:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:42.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:43.025451 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:38:46.002487 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1103203441 wd_nsec: 1103202919 Jan 20 01:38:47.631846 ignition[888]: Ignition 2.22.0 Jan 20 01:38:47.631901 ignition[888]: Stage: fetch-offline Jan 20 01:38:47.637627 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:38:47.637663 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:38:47.639511 ignition[888]: parsed url from cmdline: "" Jan 20 01:38:47.639520 ignition[888]: no config URL provided Jan 20 01:38:47.639566 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:38:47.639592 ignition[888]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:38:47.639973 ignition[888]: op(1): [started] loading QEMU firmware config module Jan 20 01:38:47.639982 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:38:48.211298 ignition[888]: op(1): [finished] loading QEMU firmware config module Jan 20 01:38:49.598816 ignition[888]: parsing config with SHA512: f8700d5b4bccbb923534991269a629bd6d56fcdfc00cd112b8f99524f2683f9c6a92505511627cc7c897950ebc22824bdd5860fbff79e07d8e79e7c09bd0ba11 Jan 20 01:38:49.718372 unknown[888]: fetched base config from "system" Jan 20 01:38:49.718392 unknown[888]: fetched user config from "qemu" Jan 20 01:38:49.744786 ignition[888]: fetch-offline: fetch-offline passed Jan 20 01:38:49.804032 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:38:49.904642 kernel: audit: type=1130 audit(1768873129.838:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:49.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:49.745221 ignition[888]: Ignition finished successfully Jan 20 01:38:49.839635 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
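[Annotation — not part of the captured log.] At 01:38:49.598816 Ignition logs the SHA512 of the config it pulled from QEMU's fw_cfg before acting on it ("parsing config with SHA512: f8700d5b..."). A hedged sketch of reproducing that digest with Python's hashlib; the file path is hypothetical, since the log shows only the digest, not where a copy of the config lives:

import hashlib

def config_sha512(path: str) -> str:
    # Hash the raw config bytes, as the "parsing config with SHA512: ..." line reports.
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

# Hypothetical usage against a saved copy of the user config:
# assert config_sha512("ignition-user.json").startswith("f8700d5b4bccbb92")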
Jan 20 01:38:49.904360 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:38:51.522978 ignition[898]: Ignition 2.22.0 Jan 20 01:38:51.523055 ignition[898]: Stage: kargs Jan 20 01:38:51.529215 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:38:51.529237 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:38:51.543818 ignition[898]: kargs: kargs passed Jan 20 01:38:51.543969 ignition[898]: Ignition finished successfully Jan 20 01:38:51.720307 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:38:51.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:51.763705 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:38:51.881240 kernel: audit: type=1130 audit(1768873131.740:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:53.428950 ignition[906]: Ignition 2.22.0 Jan 20 01:38:53.429008 ignition[906]: Stage: disks Jan 20 01:38:53.431602 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:38:53.431621 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:38:53.507949 ignition[906]: disks: disks passed Jan 20 01:38:53.508078 ignition[906]: Ignition finished successfully Jan 20 01:38:53.605910 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:38:53.806242 kernel: audit: type=1130 audit(1768873133.738:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:53.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:53.788745 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:38:53.880984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:38:53.986626 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:38:54.015824 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:38:54.050759 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:38:54.198429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:38:54.460981 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 01:38:54.525922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:38:54.633884 kernel: audit: type=1130 audit(1768873134.545:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:54.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:54.624253 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:38:55.619271 kernel: EXT4-fs (vda9): mounted filesystem dbcb8eb1-a16c-4a1a-8ee4-d933bd0ee436 r/w with ordered data mode. 
Quota mode: none. Jan 20 01:38:55.626775 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:38:55.654692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:38:55.716858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:38:55.812891 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:38:55.902403 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (925) Jan 20 01:38:55.830498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:38:55.985472 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:38:55.985571 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:38:55.830619 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:38:56.042380 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:38:56.042423 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:38:55.830665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:38:55.995439 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:38:56.085758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:38:56.152418 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:38:56.728053 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:38:56.793665 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:38:56.871314 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:38:56.934498 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:38:57.969922 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:38:57.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:58.009903 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:38:58.076475 kernel: audit: type=1130 audit(1768873137.996:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:58.067939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:38:58.221282 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:38:58.259710 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:38:58.538519 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:38:58.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:58.687225 kernel: audit: type=1130 audit(1768873138.587:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:38:59.383209 ignition[1038]: INFO : Ignition 2.22.0 Jan 20 01:38:59.383209 ignition[1038]: INFO : Stage: mount Jan 20 01:38:59.383209 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:38:59.383209 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:38:59.440828 ignition[1038]: INFO : mount: mount passed Jan 20 01:38:59.440828 ignition[1038]: INFO : Ignition finished successfully Jan 20 01:38:59.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:59.424183 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:38:59.551032 kernel: audit: type=1130 audit(1768873139.491:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:38:59.500447 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:38:59.717839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:38:59.845512 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1051) Jan 20 01:38:59.889538 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:38:59.889668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:38:59.944840 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:38:59.944927 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:38:59.961869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:39:00.280886 ignition[1067]: INFO : Ignition 2.22.0 Jan 20 01:39:00.299084 ignition[1067]: INFO : Stage: files Jan 20 01:39:00.323645 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:39:00.323645 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:39:00.551663 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:39:00.574366 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:39:00.574366 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:39:00.645811 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:39:00.671951 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:39:00.695339 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:39:00.678831 unknown[1067]: wrote ssh authorized keys file for user: core Jan 20 01:39:00.732339 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:39:00.762320 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 01:39:00.930812 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:39:01.422781 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:39:01.422781 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jan 20 01:39:01.422781 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:39:01.422781 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:39:01.552552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 01:39:04.418797 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:39:26.337661 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:39:26.337661 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 01:39:26.424540 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 01:39:27.078466 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:39:27.416494 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:39:27.482065 ignition[1067]: INFO : files: files passed Jan 20 01:39:27.482065 ignition[1067]: INFO : Ignition finished successfully Jan 20 01:39:27.929766 kernel: audit: type=1130 audit(1768873167.811:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:27.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:27.645441 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:39:27.926514 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:39:28.128967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:39:28.425563 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:39:28.447489 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:39:28.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:28.538866 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 01:39:28.691780 kernel: audit: type=1130 audit(1768873168.529:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:28.691853 kernel: audit: type=1131 audit(1768873168.529:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:28.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:28.693250 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:39:28.693250 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:39:28.767355 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:39:28.760869 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:39:28.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:28.837034 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:39:28.906340 kernel: audit: type=1130 audit(1768873168.825:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:28.906035 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:39:29.599921 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:39:29.636510 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:39:29.841536 kernel: audit: type=1130 audit(1768873169.680:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:29.841581 kernel: audit: type=1131 audit(1768873169.680:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:29.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:29.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:29.680825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:39:29.790607 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:39:29.932747 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:39:29.971422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:39:30.383798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:39:30.812214 kernel: audit: type=1130 audit(1768873170.402:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:30.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:30.432449 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 20 01:39:31.163787 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:39:31.243780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:39:31.283850 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:39:31.421571 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:39:31.486474 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:39:31.489460 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:39:31.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:31.688412 kernel: audit: type=1131 audit(1768873171.618:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:31.689793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:39:31.745922 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:39:31.776246 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:39:31.826235 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:39:32.274542 kernel: audit: type=1131 audit(1768873172.165:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:32.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:31.826512 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:39:31.826655 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:39:31.826854 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:39:31.827062 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:39:31.827361 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:39:31.827511 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:39:31.827640 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:39:31.827747 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:39:31.838507 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:39:32.275849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:39:32.438570 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:39:32.644585 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:39:32.649434 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:39:32.738377 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:39:32.738760 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:39:32.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:32.936275 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:39:33.136467 kernel: audit: type=1131 audit(1768873172.927:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.136519 kernel: audit: type=1131 audit(1768873173.050:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:32.937705 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:39:33.051938 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:39:33.116049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:39:33.183354 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:39:33.281699 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:39:33.325089 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:39:33.575767 kernel: audit: type=1131 audit(1768873173.357:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.580306 kernel: audit: type=1131 audit(1768873173.357:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.347744 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:39:33.347909 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:39:33.355921 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:39:33.356411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:39:33.356600 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 01:39:33.356710 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:39:33.356941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:39:33.357429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:39:33.357629 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:39:33.357754 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:39:33.375547 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:39:33.597476 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:39:33.603608 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 20 01:39:33.889534 kernel: audit: type=1131 audit(1768873173.808:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.906642 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:39:34.096340 kernel: audit: type=1131 audit(1768873173.971:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.096383 kernel: audit: type=1131 audit(1768873174.030:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:33.936480 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:39:33.936927 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:39:33.971865 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:39:33.972298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:39:34.038080 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:39:34.038529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:39:34.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.241793 kernel: audit: type=1131 audit(1768873174.208:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.298700 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:39:34.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.298919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:39:34.520787 kernel: audit: type=1130 audit(1768873174.328:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.520838 kernel: audit: type=1131 audit(1768873174.328:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:34.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.574632 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:39:34.674699 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:39:34.679881 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:39:34.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:34.851179 ignition[1125]: INFO : Ignition 2.22.0 Jan 20 01:39:34.851179 ignition[1125]: INFO : Stage: umount Jan 20 01:39:34.890712 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:39:34.890712 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:39:34.890712 ignition[1125]: INFO : umount: umount passed Jan 20 01:39:34.890712 ignition[1125]: INFO : Ignition finished successfully Jan 20 01:39:34.921611 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:39:34.921869 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:39:35.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.041648 systemd[1]: Stopped target network.target - Network. Jan 20 01:39:35.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.054773 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:39:35.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.054925 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:39:35.079778 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:39:35.079907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:39:35.103892 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:39:35.107232 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:39:35.143368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:39:35.143541 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:39:35.201382 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:39:35.201602 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:39:35.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:35.385331 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:39:35.403573 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:39:35.454412 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:39:35.454662 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:39:35.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.576807 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:39:35.581811 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:39:35.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.726000 audit: BPF prog-id=9 op=UNLOAD Jan 20 01:39:35.729000 audit: BPF prog-id=6 op=UNLOAD Jan 20 01:39:35.737578 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:39:35.785869 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:39:35.785981 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:39:35.869619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:39:35.966840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:39:35.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.967075 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:39:36.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:35.999638 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:39:35.999769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:39:36.032586 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:39:36.032711 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:39:36.115903 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:39:36.294510 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:39:36.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.316887 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:39:36.389196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:39:36.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:36.389363 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:39:36.474591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:39:36.474680 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:39:36.497330 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:39:36.497476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:39:36.709848 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:39:36.713766 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:39:36.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.758564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:39:36.758764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:39:36.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.815427 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:39:37.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:36.876386 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:39:36.876555 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:39:36.906891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:39:36.907064 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:39:36.964004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:39:36.965855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:39:37.067735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:39:37.112711 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:39:37.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:37.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:37.319543 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 20 01:39:37.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:37.319805 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:39:37.387300 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:39:37.469574 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:39:37.604983 systemd[1]: Switching root. Jan 20 01:39:37.776488 systemd-journald[320]: Journal stopped Jan 20 01:39:49.402651 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 20 01:39:49.402848 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:39:49.402875 kernel: SELinux: policy capability open_perms=1 Jan 20 01:39:49.403029 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:39:49.403066 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:39:49.403088 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:39:49.408344 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:39:49.408377 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:39:49.408396 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:39:49.408413 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:39:49.408598 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 20 01:39:49.408625 kernel: audit: type=1403 audit(1768873178.787:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:39:49.408656 systemd[1]: Successfully loaded SELinux policy in 432.135ms. Jan 20 01:39:49.408677 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 57.869ms. Jan 20 01:39:49.408698 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:39:49.408716 systemd[1]: Detected virtualization kvm. Jan 20 01:39:49.408734 systemd[1]: Detected architecture x86-64. Jan 20 01:39:49.408795 systemd[1]: Detected first boot. Jan 20 01:39:49.408814 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 01:39:49.408832 kernel: audit: type=1334 audit(1768873179.602:82): prog-id=10 op=LOAD Jan 20 01:39:49.408849 kernel: audit: type=1334 audit(1768873179.609:83): prog-id=10 op=UNLOAD Jan 20 01:39:49.408867 kernel: audit: type=1334 audit(1768873179.609:84): prog-id=11 op=LOAD Jan 20 01:39:49.408889 kernel: audit: type=1334 audit(1768873179.609:85): prog-id=11 op=UNLOAD Jan 20 01:39:49.408907 zram_generator::config[1171]: No configuration found. Jan 20 01:39:49.408965 kernel: Guest personality initialized and is inactive Jan 20 01:39:49.408983 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 01:39:49.409001 kernel: Initialized host personality Jan 20 01:39:49.409017 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:39:49.409035 systemd[1]: Populated /etc with preset unit settings. 
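[Annotation — not part of the captured log.] The systemd 257.9 banner above encodes compile-time options as +NAME (built in) and -NAME (compiled out). A trivial sketch splitting the banner into the two sets; the banner string is copied from the log line, and only the +/- convention is assumed:

banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
          "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF "
          "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled  = {t[1:] for t in banner.split() if t[0] == "+"}
disabled = {t[1:] for t in banner.split() if t[0] == "-"}
assert "SELINUX" in enabled and "APPARMOR" in disabled  # consistent with the SELinux lines above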
Jan 20 01:39:49.409164 kernel: audit: type=1334 audit(1768873184.276:86): prog-id=12 op=LOAD Jan 20 01:39:49.411398 kernel: audit: type=1334 audit(1768873184.277:87): prog-id=3 op=UNLOAD Jan 20 01:39:49.411428 kernel: audit: type=1334 audit(1768873184.280:88): prog-id=13 op=LOAD Jan 20 01:39:49.411456 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:39:49.411479 kernel: audit: type=1334 audit(1768873184.283:89): prog-id=14 op=LOAD Jan 20 01:39:49.411500 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:39:49.411519 kernel: audit: type=1334 audit(1768873184.283:90): prog-id=4 op=UNLOAD Jan 20 01:39:49.411538 kernel: audit: type=1334 audit(1768873184.283:91): prog-id=5 op=UNLOAD Jan 20 01:39:49.411663 kernel: audit: type=1131 audit(1768873184.314:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:49.411683 kernel: audit: type=1334 audit(1768873184.637:93): prog-id=12 op=UNLOAD Jan 20 01:39:49.411704 kernel: audit: type=1130 audit(1768873184.702:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:49.411722 kernel: audit: type=1131 audit(1768873184.702:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:49.411787 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:39:49.411880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:39:49.411914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:39:49.411934 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:39:49.411956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:39:49.411977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:39:49.411996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:39:49.412015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:39:49.412222 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:39:49.412250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:39:49.412269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:39:49.412286 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:39:49.412306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:39:49.412329 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:39:49.412348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:39:49.412428 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:39:49.412449 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 20 01:39:49.412468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:39:49.412487 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:39:49.412510 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:39:49.412528 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:39:49.412545 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:39:49.412625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:39:49.412651 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:39:49.412674 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 01:39:49.412692 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:39:49.412709 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:39:49.412728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:39:49.412750 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:39:49.412768 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:39:49.412849 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:39:49.412870 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 01:39:49.412892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:39:49.412911 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 01:39:49.412928 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 01:39:49.412948 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:39:49.412966 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:39:49.413046 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:39:49.413070 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:39:49.413089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:39:49.413247 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:39:49.413268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:39:49.413290 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:39:49.413309 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:39:49.413393 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:39:49.413418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:39:49.413436 systemd[1]: Reached target machines.target - Containers. Jan 20 01:39:49.413457 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:39:49.413476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:39:49.413497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:39:49.413581 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 20 01:39:49.413604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:39:49.413623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:39:49.413644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:39:49.413664 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:39:49.413681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:39:49.413700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:39:49.413785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:39:49.413945 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:39:49.414026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:39:49.414051 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:39:49.424485 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:39:49.424525 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:39:49.424547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:39:49.424627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:39:49.424656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:39:49.424677 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:39:49.424697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:39:49.424718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:39:49.424788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:39:49.424810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:39:49.424886 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:39:49.424911 kernel: fuse: init (API version 7.41) Jan 20 01:39:49.424932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:39:49.425005 systemd-journald[1257]: Collecting audit messages is enabled. Jan 20 01:39:49.425230 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 20 01:39:49.425263 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 20 01:39:49.425346 kernel: audit: type=1305 audit(1768873189.382:104): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 01:39:49.425370 kernel: audit: type=1300 audit(1768873189.382:104): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd6ed96370 a2=4000 a3=0 items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:39:49.425389 kernel: audit: type=1327 audit(1768873189.382:104): proctitle="/usr/lib/systemd/systemd-journald" Jan 20 01:39:49.425459 systemd-journald[1257]: Journal started Jan 20 01:39:49.425495 systemd-journald[1257]: Runtime Journal (/run/log/journal/f6309e0c6bf5444b960ee69bfabc74cd) is 6M, max 48.2M, 42.2M free. Jan 20 01:39:46.319000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 01:39:48.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:48.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:48.118000 audit: BPF prog-id=14 op=UNLOAD Jan 20 01:39:48.118000 audit: BPF prog-id=13 op=UNLOAD Jan 20 01:39:48.150000 audit: BPF prog-id=15 op=LOAD Jan 20 01:39:48.169000 audit: BPF prog-id=16 op=LOAD Jan 20 01:39:48.176000 audit: BPF prog-id=17 op=LOAD Jan 20 01:39:49.382000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 01:39:49.382000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd6ed96370 a2=4000 a3=0 items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:39:49.382000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 01:39:44.177744 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:39:44.291972 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:39:44.307954 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:39:44.314584 systemd[1]: systemd-journald.service: Consumed 3.167s CPU time. Jan 20 01:39:49.708457 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:39:49.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:49.805723 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:39:49.823297 kernel: audit: type=1130 audit(1768873189.764:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:39:49.845829 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:39:49.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:49.898070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:39:49.986357 kernel: audit: type=1130 audit(1768873189.887:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.013750 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:39:50.027410 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:39:50.103922 kernel: audit: type=1130 audit(1768873190.009:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.214653 kernel: audit: type=1130 audit(1768873190.136:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.216360 kernel: audit: type=1131 audit(1768873190.136:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.146855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:39:50.152569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:39:50.301516 kernel: audit: type=1130 audit(1768873190.294:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.300051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:39:50.300580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:39:50.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 01:39:50.365828 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:39:50.366342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:39:50.371049 kernel: audit: type=1131 audit(1768873190.294:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.400243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:39:50.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.431282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:39:50.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.471440 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:39:50.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.504804 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 01:39:50.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.566555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:39:50.596765 kernel: ACPI: bus type drm_connector registered Jan 20 01:39:50.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.705045 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:39:50.714959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 20 01:39:50.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.752576 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:39:50.753047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:39:50.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:50.810834 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:39:50.845671 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 01:39:50.869652 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:39:50.914888 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:39:50.966078 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:39:50.966446 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:39:51.032664 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 01:39:51.059925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:39:51.060724 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 01:39:51.114628 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:39:51.219380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:39:51.250684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:39:51.300452 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:39:51.344892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:39:51.387567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:39:51.504183 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:39:51.597587 systemd-journald[1257]: Time spent on flushing to /var/log/journal/f6309e0c6bf5444b960ee69bfabc74cd is 502.830ms for 1158 entries. Jan 20 01:39:51.597587 systemd-journald[1257]: System Journal (/var/log/journal/f6309e0c6bf5444b960ee69bfabc74cd) is 8M, max 163.5M, 155.5M free. Jan 20 01:39:52.436260 systemd-journald[1257]: Received client request to flush runtime journal. 
Jan 20 01:39:52.436394 kernel: loop1: detected capacity change from 0 to 111544 Jan 20 01:39:52.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:51.624857 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:39:51.688692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:39:51.847674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:39:52.253413 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:39:52.307550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:39:52.405669 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 01:39:52.506330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:39:52.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:52.618756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:39:52.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:52.753593 kernel: loop2: detected capacity change from 0 to 119256 Jan 20 01:39:52.751705 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:39:52.762933 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 01:39:52.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:52.929871 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:39:52.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:53.086000 audit: BPF prog-id=18 op=LOAD Jan 20 01:39:53.097000 audit: BPF prog-id=19 op=LOAD Jan 20 01:39:53.097000 audit: BPF prog-id=20 op=LOAD Jan 20 01:39:53.154932 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 01:39:53.198489 kernel: loop3: detected capacity change from 0 to 229808 Jan 20 01:39:53.604000 audit: BPF prog-id=21 op=LOAD Jan 20 01:39:53.712403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:39:53.820566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:39:53.933000 audit: BPF prog-id=22 op=LOAD Jan 20 01:39:53.941000 audit: BPF prog-id=23 op=LOAD Jan 20 01:39:53.941000 audit: BPF prog-id=24 op=LOAD Jan 20 01:39:53.951735 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Jan 20 01:39:54.032530 kernel: loop4: detected capacity change from 0 to 111544 Jan 20 01:39:54.036000 audit: BPF prog-id=25 op=LOAD Jan 20 01:39:54.036000 audit: BPF prog-id=26 op=LOAD Jan 20 01:39:54.036000 audit: BPF prog-id=27 op=LOAD Jan 20 01:39:54.129829 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:39:54.403768 kernel: loop5: detected capacity change from 0 to 119256 Jan 20 01:39:54.498568 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Jan 20 01:39:54.498597 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Jan 20 01:39:54.712570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:39:54.722279 kernel: loop6: detected capacity change from 0 to 229808 Jan 20 01:39:54.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:54.779602 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 20 01:39:54.779723 kernel: audit: type=1130 audit(1768873194.761:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:39:55.170576 (sd-merge)[1315]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 20 01:39:55.218927 (sd-merge)[1315]: Merged extensions into '/usr'. Jan 20 01:39:55.649986 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:39:55.653193 systemd[1]: Reloading... Jan 20 01:39:55.997847 systemd-nsresourced[1316]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 01:39:56.855246 zram_generator::config[1356]: No configuration found. Jan 20 01:39:59.467733 systemd-oomd[1310]: No swap; memory pressure usage will be degraded Jan 20 01:39:59.915278 systemd-resolved[1313]: Positive Trust Anchors: Jan 20 01:39:59.915300 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:39:59.918439 systemd-resolved[1313]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 01:39:59.918493 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:40:00.001529 systemd-resolved[1313]: Defaulting to hostname 'linux'. Jan 20 01:40:01.534785 systemd[1]: Reloading finished in 5816 ms. Jan 20 01:40:02.077580 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 01:40:02.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.100568 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 20 01:40:02.124197 kernel: audit: type=1130 audit(1768873202.094:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.147277 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 01:40:02.206193 kernel: audit: type=1130 audit(1768873202.133:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.220973 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:40:02.251216 kernel: audit: type=1130 audit(1768873202.212:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.303022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:40:02.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.345666 kernel: audit: type=1130 audit(1768873202.295:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.423012 kernel: audit: type=1130 audit(1768873202.372:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:02.459707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:02.771608 systemd[1]: Starting ensure-sysext.service... Jan 20 01:40:02.804849 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 20 01:40:02.841191 kernel: audit: type=1334 audit(1768873202.823:146): prog-id=28 op=LOAD Jan 20 01:40:02.823000 audit: BPF prog-id=28 op=LOAD Jan 20 01:40:02.823000 audit: BPF prog-id=25 op=UNLOAD Jan 20 01:40:02.869966 kernel: audit: type=1334 audit(1768873202.823:147): prog-id=25 op=UNLOAD Jan 20 01:40:02.872549 kernel: audit: type=1334 audit(1768873202.823:148): prog-id=29 op=LOAD Jan 20 01:40:02.823000 audit: BPF prog-id=29 op=LOAD Jan 20 01:40:02.823000 audit: BPF prog-id=30 op=LOAD Jan 20 01:40:02.824000 audit: BPF prog-id=26 op=UNLOAD Jan 20 01:40:02.904617 kernel: audit: type=1334 audit(1768873202.823:149): prog-id=30 op=LOAD Jan 20 01:40:02.904779 kernel: audit: type=1334 audit(1768873202.824:150): prog-id=26 op=UNLOAD Jan 20 01:40:02.824000 audit: BPF prog-id=27 op=UNLOAD Jan 20 01:40:02.826000 audit: BPF prog-id=31 op=LOAD Jan 20 01:40:02.826000 audit: BPF prog-id=15 op=UNLOAD Jan 20 01:40:02.839000 audit: BPF prog-id=32 op=LOAD Jan 20 01:40:02.839000 audit: BPF prog-id=33 op=LOAD Jan 20 01:40:02.839000 audit: BPF prog-id=16 op=UNLOAD Jan 20 01:40:02.839000 audit: BPF prog-id=17 op=UNLOAD Jan 20 01:40:02.840000 audit: BPF prog-id=34 op=LOAD Jan 20 01:40:02.840000 audit: BPF prog-id=21 op=UNLOAD Jan 20 01:40:02.853000 audit: BPF prog-id=35 op=LOAD Jan 20 01:40:02.853000 audit: BPF prog-id=22 op=UNLOAD Jan 20 01:40:02.853000 audit: BPF prog-id=36 op=LOAD Jan 20 01:40:02.853000 audit: BPF prog-id=37 op=LOAD Jan 20 01:40:02.853000 audit: BPF prog-id=23 op=UNLOAD Jan 20 01:40:02.853000 audit: BPF prog-id=24 op=UNLOAD Jan 20 01:40:02.860000 audit: BPF prog-id=38 op=LOAD Jan 20 01:40:02.860000 audit: BPF prog-id=18 op=UNLOAD Jan 20 01:40:02.861000 audit: BPF prog-id=39 op=LOAD Jan 20 01:40:02.861000 audit: BPF prog-id=40 op=LOAD Jan 20 01:40:02.861000 audit: BPF prog-id=19 op=UNLOAD Jan 20 01:40:02.861000 audit: BPF prog-id=20 op=UNLOAD Jan 20 01:40:03.005410 systemd[1]: Reload requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:40:03.010258 systemd[1]: Reloading... Jan 20 01:40:03.110306 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:40:03.110646 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:40:03.111459 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:40:03.145654 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 20 01:40:03.145771 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 20 01:40:03.201938 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:40:03.203267 systemd-tmpfiles[1398]: Skipping /boot Jan 20 01:40:03.396630 zram_generator::config[1427]: No configuration found. Jan 20 01:40:03.399924 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:40:03.399992 systemd-tmpfiles[1398]: Skipping /boot Jan 20 01:40:04.552833 systemd[1]: Reloading finished in 1541 ms. Jan 20 01:40:04.635285 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:40:04.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:40:04.669000 audit: BPF prog-id=41 op=LOAD Jan 20 01:40:04.670000 audit: BPF prog-id=34 op=UNLOAD Jan 20 01:40:04.697000 audit: BPF prog-id=42 op=LOAD Jan 20 01:40:04.697000 audit: BPF prog-id=28 op=UNLOAD Jan 20 01:40:04.698000 audit: BPF prog-id=43 op=LOAD Jan 20 01:40:04.698000 audit: BPF prog-id=44 op=LOAD Jan 20 01:40:04.698000 audit: BPF prog-id=29 op=UNLOAD Jan 20 01:40:04.698000 audit: BPF prog-id=30 op=UNLOAD Jan 20 01:40:04.699000 audit: BPF prog-id=45 op=LOAD Jan 20 01:40:04.699000 audit: BPF prog-id=35 op=UNLOAD Jan 20 01:40:04.700000 audit: BPF prog-id=46 op=LOAD Jan 20 01:40:04.700000 audit: BPF prog-id=47 op=LOAD Jan 20 01:40:04.700000 audit: BPF prog-id=36 op=UNLOAD Jan 20 01:40:04.705000 audit: BPF prog-id=37 op=UNLOAD Jan 20 01:40:04.711000 audit: BPF prog-id=48 op=LOAD Jan 20 01:40:04.726000 audit: BPF prog-id=38 op=UNLOAD Jan 20 01:40:04.730000 audit: BPF prog-id=49 op=LOAD Jan 20 01:40:04.730000 audit: BPF prog-id=50 op=LOAD Jan 20 01:40:04.730000 audit: BPF prog-id=39 op=UNLOAD Jan 20 01:40:04.730000 audit: BPF prog-id=40 op=UNLOAD Jan 20 01:40:04.741000 audit: BPF prog-id=51 op=LOAD Jan 20 01:40:04.742000 audit: BPF prog-id=31 op=UNLOAD Jan 20 01:40:04.747000 audit: BPF prog-id=52 op=LOAD Jan 20 01:40:04.747000 audit: BPF prog-id=53 op=LOAD Jan 20 01:40:04.747000 audit: BPF prog-id=32 op=UNLOAD Jan 20 01:40:04.750000 audit: BPF prog-id=33 op=UNLOAD Jan 20 01:40:04.781322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:04.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:05.031333 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:40:05.088671 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:40:05.102171 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:40:05.205638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:40:05.269000 audit: BPF prog-id=8 op=UNLOAD Jan 20 01:40:05.269000 audit: BPF prog-id=7 op=UNLOAD Jan 20 01:40:05.293000 audit: BPF prog-id=54 op=LOAD Jan 20 01:40:05.293000 audit: BPF prog-id=55 op=LOAD Jan 20 01:40:05.334290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:05.388699 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:40:05.436671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:40:05.438722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:05.452297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:05.504808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:05.567347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:05.603000 audit[1480]: SYSTEM_BOOT pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jan 20 01:40:05.598010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:05.598670 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 01:40:05.616595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:40:05.616765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:40:05.827622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:05.828362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:05.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:05.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:05.888336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:05.894833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:05.914059 systemd-udevd[1474]: Using default interface naming scheme 'v257'. Jan 20 01:40:05.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:05.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:05.936764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:40:05.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:06.018589 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:40:06.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:06.039905 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:06.044900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:06.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:40:06.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:40:06.167902 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:40:06.169605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:06.175612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:06.208334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:40:06.243859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:06.267285 augenrules[1504]: No rules Jan 20 01:40:06.264000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 01:40:06.264000 audit[1504]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdef977fa0 a2=420 a3=0 items=0 ppid=1469 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:40:06.264000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:40:06.289212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:06.307739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:06.308307 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 01:40:06.308557 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:40:06.308748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:40:06.311758 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:40:06.312299 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:40:06.322847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:06.323324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:06.359282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:06.392055 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:40:06.392669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:40:06.414968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:06.421014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:06.454800 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:40:06.488695 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:06.489344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:06.547083 systemd[1]: Finished ensure-sysext.service. Jan 20 01:40:06.661292 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 01:40:06.690792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:40:06.690936 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:40:06.779819 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:40:06.804833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:40:06.843811 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:40:08.758783 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:40:08.849408 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 01:40:09.167757 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:40:09.640721 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:40:09.641740 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:40:09.749190 systemd-networkd[1534]: lo: Link UP Jan 20 01:40:09.749824 systemd-networkd[1534]: lo: Gained carrier Jan 20 01:40:09.759402 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:09.783062 systemd-networkd[1534]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:40:09.788327 systemd-networkd[1534]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:40:09.789661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:40:09.799560 systemd[1]: Reached target network.target - Network. Jan 20 01:40:09.808953 systemd-networkd[1534]: eth0: Link UP Jan 20 01:40:09.815921 systemd-networkd[1534]: eth0: Gained carrier Jan 20 01:40:09.816183 systemd-networkd[1534]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:40:09.817393 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:40:09.874711 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:40:10.304617 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:40:10.875716 systemd-networkd[1534]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:40:10.912170 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:40:11.080237 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:40:11.093940 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:40:12.110907 systemd-resolved[1313]: Clock change detected. Flushing caches. Jan 20 01:40:12.111300 systemd-timesyncd[1536]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 01:40:12.111469 systemd-timesyncd[1536]: Initial clock synchronization to Tue 2026-01-20 01:40:12.067562 UTC. Jan 20 01:40:12.209319 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 20 01:40:12.541909 systemd-networkd[1534]: eth0: Gained IPv6LL Jan 20 01:40:12.579340 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:40:12.629021 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:40:12.887828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:14.736212 ldconfig[1471]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:40:14.782517 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:40:15.024525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:40:15.074994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:15.191242 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:40:15.238972 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:15.299355 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:40:15.322226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:40:15.426214 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 01:40:15.471794 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:40:15.511104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:40:15.538910 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 01:40:15.574809 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 01:40:15.609590 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:40:15.683472 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:40:15.693554 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:15.724280 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:15.799270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:40:15.867244 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:40:16.011669 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 01:40:16.048229 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 01:40:16.075916 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 01:40:16.249942 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:40:16.280007 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 01:40:16.307643 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:40:16.338735 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:16.363799 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:16.541552 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:40:16.541797 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 20 01:40:16.689948 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:40:17.043176 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 01:40:17.124586 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:40:17.187639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:40:17.285984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:40:17.363209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:40:17.482883 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:40:17.580078 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 01:40:17.618576 jq[1594]: false Jan 20 01:40:17.658298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:17.720597 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:40:17.773890 extend-filesystems[1595]: Found /dev/vda6 Jan 20 01:40:17.868196 extend-filesystems[1595]: Found /dev/vda9 Jan 20 01:40:17.801537 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:40:18.016913 extend-filesystems[1595]: Checking size of /dev/vda9 Jan 20 01:40:17.903080 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:40:18.164233 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:40:18.384799 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing passwd entry cache Jan 20 01:40:18.383065 oslogin_cache_refresh[1596]: Refreshing passwd entry cache Jan 20 01:40:18.415555 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:40:18.488768 extend-filesystems[1595]: Resized partition /dev/vda9 Jan 20 01:40:18.618663 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting users, quitting Jan 20 01:40:18.617070 oslogin_cache_refresh[1596]: Failure getting users, quitting Jan 20 01:40:18.618922 extend-filesystems[1618]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 01:40:18.700489 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:40:18.700489 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing group entry cache Jan 20 01:40:18.620265 oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:40:18.620353 oslogin_cache_refresh[1596]: Refreshing group entry cache Jan 20 01:40:18.704528 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting groups, quitting Jan 20 01:40:18.704528 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:40:18.704377 oslogin_cache_refresh[1596]: Failure getting groups, quitting Jan 20 01:40:18.704413 oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:40:18.738318 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 01:40:18.795597 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 20 01:40:18.839287 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:40:18.844990 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:40:18.872326 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:40:18.923440 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:40:19.125971 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:40:19.165636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:40:19.169299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:40:19.172945 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:40:19.176808 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:40:19.214551 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:40:19.215366 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:40:19.571824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:40:19.650358 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:40:19.695373 jq[1626]: true Jan 20 01:40:19.711280 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:40:20.184767 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 01:40:20.381639 extend-filesystems[1618]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:40:20.381639 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:40:20.381639 extend-filesystems[1618]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 01:40:20.588188 jq[1641]: true Jan 20 01:40:20.569079 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:40:20.664383 extend-filesystems[1595]: Resized filesystem in /dev/vda9 Jan 20 01:40:20.576943 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:40:21.906903 update_engine[1625]: I20260120 01:40:21.776053 1625 main.cc:92] Flatcar Update Engine starting Jan 20 01:40:20.644025 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 01:40:20.655595 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:40:22.587980 tar[1638]: linux-amd64/LICENSE Jan 20 01:40:22.588579 tar[1638]: linux-amd64/helm Jan 20 01:40:22.670563 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:40:22.686238 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:40:22.717785 systemd-logind[1623]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:40:22.721021 systemd-logind[1623]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:40:22.739296 systemd-logind[1623]: New seat seat0. Jan 20 01:40:22.750521 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:40:23.119756 bash[1680]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:40:23.150024 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 20 01:40:23.180836 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:40:23.187605 sshd_keygen[1632]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:40:23.923598 dbus-daemon[1592]: [system] SELinux support is enabled Jan 20 01:40:23.967809 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:40:24.023984 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:40:24.024047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:40:24.158428 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:40:24.184545 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:40:24.319753 dbus-daemon[1592]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:40:24.338001 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:40:24.355564 update_engine[1625]: I20260120 01:40:24.342655 1625 update_check_scheduler.cc:74] Next update check in 9m46s Jan 20 01:40:24.411021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:40:25.386639 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:40:25.457464 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:40:25.523329 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:34910.service - OpenSSH per-connection server daemon (10.0.0.1:34910). Jan 20 01:40:27.655360 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:40:27.675396 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:40:28.010879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:40:29.274844 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:40:29.412377 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:40:29.455963 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:40:29.463763 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:40:29.513150 locksmithd[1694]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:40:29.746506 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 34910 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:29.808263 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:30.106897 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:40:30.142528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:40:30.203568 containerd[1643]: time="2026-01-20T01:40:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:40:30.364082 containerd[1643]: time="2026-01-20T01:40:30.351028352Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 01:40:30.389541 systemd-logind[1623]: New session 1 of user core. 
Jan 20 01:40:30.716040 containerd[1643]: time="2026-01-20T01:40:30.710899722Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="219.199µs" Jan 20 01:40:30.718773 containerd[1643]: time="2026-01-20T01:40:30.711090789Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:40:30.718773 containerd[1643]: time="2026-01-20T01:40:30.718403529Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:40:30.718773 containerd[1643]: time="2026-01-20T01:40:30.718453963Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:40:30.720895 containerd[1643]: time="2026-01-20T01:40:30.720860264Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:40:30.721658 containerd[1643]: time="2026-01-20T01:40:30.721159954Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:40:30.722588 containerd[1643]: time="2026-01-20T01:40:30.722559657Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:40:30.722800 containerd[1643]: time="2026-01-20T01:40:30.722768326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.723353 containerd[1643]: time="2026-01-20T01:40:30.723320607Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.723448 containerd[1643]: time="2026-01-20T01:40:30.723425713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:40:30.723565 containerd[1643]: time="2026-01-20T01:40:30.723541098Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:40:30.723654 containerd[1643]: time="2026-01-20T01:40:30.723630235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.724086 containerd[1643]: time="2026-01-20T01:40:30.724061008Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.724158 containerd[1643]: time="2026-01-20T01:40:30.724142280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:40:30.724430 containerd[1643]: time="2026-01-20T01:40:30.724404139Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.725140 containerd[1643]: time="2026-01-20T01:40:30.725095109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:40:30.735861 containerd[1643]: time="2026-01-20T01:40:30.735670418Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 20 01:40:30.736148 containerd[1643]: time="2026-01-20T01:40:30.736026433Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:40:30.736628 containerd[1643]: time="2026-01-20T01:40:30.736490559Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:40:30.764394 containerd[1643]: time="2026-01-20T01:40:30.761854642Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:40:30.764394 containerd[1643]: time="2026-01-20T01:40:30.762131399Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:40:31.040117 containerd[1643]: time="2026-01-20T01:40:31.022796487Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:40:31.043302 containerd[1643]: time="2026-01-20T01:40:31.041951491Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.044922586Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.044984853Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045015630Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045033975Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045049583Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045062147Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045082145Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045099687Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045248846Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045275225Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045290744Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:40:31.047516 containerd[1643]: time="2026-01-20T01:40:31.045310941Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:40:31.053871 containerd[1643]: time="2026-01-20T01:40:31.051394857Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 01:40:31.053871 
containerd[1643]: time="2026-01-20T01:40:31.051806546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 01:40:31.053871 containerd[1643]: time="2026-01-20T01:40:31.052066902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:40:31.056082 containerd[1643]: time="2026-01-20T01:40:31.052094914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:40:31.056183 containerd[1643]: time="2026-01-20T01:40:31.056134524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.056279515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.056560609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.056841604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.056871740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.057155840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.057355563Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.057987492Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.058248931Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.058332316Z" level=info msg="Start snapshots syncer" Jan 20 01:40:31.070814 containerd[1643]: time="2026-01-20T01:40:31.058466597Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:40:31.071386 containerd[1643]: time="2026-01-20T01:40:31.062858595Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:40:31.071386 containerd[1643]: time="2026-01-20T01:40:31.063981751Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.064100042Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.065927212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.065967938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.065984820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066041716Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066061273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066111416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066128739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066185184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066247470Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066780896Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066812956Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:40:31.072293 containerd[1643]: time="2026-01-20T01:40:31.066825699Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066837742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066851057Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066924033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066945624Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066964087Z" level=info msg="runtime interface created" Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.066971993Z" level=info msg="created NRI interface" Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.067052332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.067075195Z" level=info msg="Connect containerd service" Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.067110381Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:40:31.122373 containerd[1643]: time="2026-01-20T01:40:31.070464672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:40:31.138564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:40:31.679763 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:40:32.180444 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:40:32.425148 systemd-logind[1623]: New session c1 of user core. Jan 20 01:40:37.718074 containerd[1643]: time="2026-01-20T01:40:37.690143754Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.732878765Z" level=info msg="Start subscribing containerd event" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733108674Z" level=info msg="Start recovering state" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733538767Z" level=info msg="Start event monitor" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733567300Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733584943Z" level=info msg="Start streaming server" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733602576Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733615981Z" level=info msg="runtime interface starting up..." Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733624627Z" level=info msg="starting plugins..." Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.733654643Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.746772493Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:40:37.749311 containerd[1643]: time="2026-01-20T01:40:37.747021909Z" level=info msg="containerd successfully booted in 7.546146s" Jan 20 01:40:37.759948 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:40:38.100408 systemd[1720]: Queued start job for default target default.target. Jan 20 01:40:38.161637 systemd[1720]: Created slice app.slice - User Application Slice. Jan 20 01:40:38.162011 systemd[1720]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 01:40:38.162179 systemd[1720]: Reached target paths.target - Paths. Jan 20 01:40:38.171155 systemd[1720]: Reached target timers.target - Timers. Jan 20 01:40:38.203229 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:40:38.346097 systemd[1720]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 01:40:38.490665 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:40:38.490939 systemd[1720]: Reached target sockets.target - Sockets. Jan 20 01:40:38.531472 systemd[1720]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 01:40:38.532045 systemd[1720]: Reached target basic.target - Basic System. Jan 20 01:40:38.532398 systemd[1720]: Reached target default.target - Main User Target. Jan 20 01:40:38.532990 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:40:38.533178 systemd[1720]: Startup finished in 5.684s. Jan 20 01:40:38.564944 tar[1638]: linux-amd64/README.md Jan 20 01:40:38.655987 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:40:38.774906 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:40:38.963990 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:36316.service - OpenSSH per-connection server daemon (10.0.0.1:36316). 
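
With "containerd successfully booted" logged and containerd.service started, the two sockets from the msg=serving lines are live. A quick probe, assuming the bundled ctr client is on PATH:

    # talk to the daemon over the socket path logged above
    sudo ctr --address /run/containerd/containerd.sock version
    # list plugins; the ones skipped during startup (btrfs, devmapper,
    # erofs, zfs) should report the same "skip plugin" reasons here
    sudo ctr --address /run/containerd/containerd.sock plugins ls
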
Jan 20 01:40:39.440248 kernel: kvm_amd: TSC scaling supported Jan 20 01:40:39.441908 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:40:39.441945 kernel: kvm_amd: Nested Paging enabled Jan 20 01:40:39.454008 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:40:39.454365 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:40:39.631808 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 36316 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:39.647003 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:39.712626 systemd-logind[1623]: New session 2 of user core. Jan 20 01:40:39.806587 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:40:40.212794 sshd[1755]: Connection closed by 10.0.0.1 port 36316 Jan 20 01:40:40.219122 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:40.288475 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:36316.service: Deactivated successfully. Jan 20 01:40:40.321020 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:40:40.345089 systemd-logind[1623]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:40:40.355628 systemd-logind[1623]: Removed session 2. Jan 20 01:40:40.374218 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324). Jan 20 01:40:42.162118 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:42.190587 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:42.687269 systemd-logind[1623]: New session 3 of user core. Jan 20 01:40:42.694487 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:40:43.080538 sshd[1764]: Connection closed by 10.0.0.1 port 36324 Jan 20 01:40:43.128178 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:43.197797 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:36324.service: Deactivated successfully. Jan 20 01:40:43.223831 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:40:43.240857 systemd-logind[1623]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:40:43.279136 systemd-logind[1623]: Removed session 3. Jan 20 01:40:50.226970 kernel: EDAC MC: Ver: 3.0.0 Jan 20 01:40:51.784663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:40:51.789827 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:40:51.805028 systemd[1]: Startup finished in 36.884s (kernel) + 1min 30.056s (initrd) + 1min 12.656s (userspace) = 3min 19.598s. Jan 20 01:40:51.882778 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:53.266209 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:39976.service - OpenSSH per-connection server daemon (10.0.0.1:39976). Jan 20 01:40:54.100590 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 39976 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:54.114305 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:54.198818 systemd-logind[1623]: New session 4 of user core. Jan 20 01:40:54.217943 systemd[1]: Started session-4.scope - Session 4 of User core. 
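
The "Startup finished in 36.884s (kernel) + 1min 30.056s (initrd) + 1min 12.656s (userspace)" line gives only the per-phase split. To attribute the long userspace phase to individual units on the booted system:

    # reprint the overall split from the log line above
    systemd-analyze
    # units ordered by time spent activating
    systemd-analyze blame
    # slowest dependency chain into the target reached above
    systemd-analyze critical-chain multi-user.target
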
Jan 20 01:40:54.375159 sshd[1785]: Connection closed by 10.0.0.1 port 39976 Jan 20 01:40:54.372633 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:54.450171 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:39976.service: Deactivated successfully. Jan 20 01:40:54.469387 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:40:54.484880 systemd-logind[1623]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:40:54.509047 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:58352.service - OpenSSH per-connection server daemon (10.0.0.1:58352). Jan 20 01:40:54.580481 systemd-logind[1623]: Removed session 4. Jan 20 01:40:55.084290 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 58352 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:55.097641 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:55.230665 systemd-logind[1623]: New session 5 of user core. Jan 20 01:40:55.267421 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:40:55.466935 sshd[1794]: Connection closed by 10.0.0.1 port 58352 Jan 20 01:40:55.464179 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:55.547856 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:58352.service: Deactivated successfully. Jan 20 01:40:55.564077 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:40:55.580165 systemd-logind[1623]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:40:55.599899 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:58366.service - OpenSSH per-connection server daemon (10.0.0.1:58366). Jan 20 01:40:55.612990 systemd-logind[1623]: Removed session 5. Jan 20 01:40:56.445847 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 58366 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:56.458613 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:56.740237 systemd-logind[1623]: New session 6 of user core. Jan 20 01:40:56.778742 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:40:56.944892 sshd[1804]: Connection closed by 10.0.0.1 port 58366 Jan 20 01:40:56.946343 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:57.013539 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:58366.service: Deactivated successfully. Jan 20 01:40:57.159394 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:40:57.181934 systemd-logind[1623]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:40:57.251049 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:58368.service - OpenSSH per-connection server daemon (10.0.0.1:58368). Jan 20 01:40:57.265576 systemd-logind[1623]: Removed session 6. Jan 20 01:40:58.039132 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:58.050322 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:58.125856 systemd-logind[1623]: New session 7 of user core. Jan 20 01:40:58.242963 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 01:40:58.583892 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:40:58.584863 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:40:58.721349 sudo[1814]: pam_unix(sudo:session): session closed for user root Jan 20 01:40:58.833043 sshd[1813]: Connection closed by 10.0.0.1 port 58368 Jan 20 01:40:58.834643 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 20 01:40:58.937841 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:58368.service: Deactivated successfully. Jan 20 01:40:58.955296 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:40:58.960915 systemd-logind[1623]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:40:59.011378 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:58382.service - OpenSSH per-connection server daemon (10.0.0.1:58382). Jan 20 01:40:59.022163 systemd-logind[1623]: Removed session 7. Jan 20 01:40:59.382559 kubelet[1775]: E0120 01:40:59.380816 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:59.412974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:59.417339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:59.426561 systemd[1]: kubelet.service: Consumed 5.770s CPU time, 271.3M memory peak. Jan 20 01:40:59.570074 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 58382 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:40:59.631552 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:59.835054 systemd-logind[1623]: New session 8 of user core. Jan 20 01:40:59.860211 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:41:00.030600 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:41:00.031513 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:00.235478 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:00.315004 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 01:41:00.327110 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:00.715363 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:41:01.469000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:41:01.479181 augenrules[1849]: No rules Jan 20 01:41:01.486015 kernel: kauditd_printk_skb: 65 callbacks suppressed Jan 20 01:41:01.486133 kernel: audit: type=1305 audit(1768873261.469:214): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:41:01.498670 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:41:01.502792 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
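
The kubelet failure above is the usual pre-bootstrap state: kubelet.service starts before /var/lib/kubelet/config.yaml exists, exits with status 1, and systemd keeps scheduling restarts (the counter climbs later in this log). On a kubeadm-managed node the file is written at join time; that is an assumption here, since this host's install.sh may provision it differently:

    # the file whose absence causes the exit above
    ls -l /var/lib/kubelet/config.yaml
    # hypothetical kubeadm bootstrap that would create it (placeholders kept)
    sudo kubeadm join <api-server>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
    # once the file exists, the restart loop seen here converges
    systemctl status kubelet
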
Jan 20 01:41:01.469000 audit[1849]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd541d33c0 a2=420 a3=0 items=0 ppid=1830 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:01.708343 kernel: audit: type=1300 audit(1768873261.469:214): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd541d33c0 a2=420 a3=0 items=0 ppid=1830 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:01.714209 kernel: audit: type=1327 audit(1768873261.469:214): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:41:01.469000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:41:01.733815 sudo[1826]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:01.765026 kernel: audit: type=1130 audit(1768873261.724:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.806648 sshd[1825]: Connection closed by 10.0.0.1 port 58382 Jan 20 01:41:01.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.827514 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:01.903007 kernel: audit: type=1131 audit(1768873261.724:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.733000 audit[1826]: USER_END pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.947527 kernel: audit: type=1106 audit(1768873261.733:217): pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.956818 kernel: audit: type=1104 audit(1768873261.743:218): pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:01.743000 audit[1826]: CRED_DISP pid=1826 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 01:41:02.067940 kernel: audit: type=1106 audit(1768873261.853:219): pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:01.853000 audit[1821]: USER_END pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:02.108872 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:58382.service: Deactivated successfully. Jan 20 01:41:02.135019 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:41:02.165506 kernel: audit: type=1104 audit(1768873261.853:220): pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:01.853000 audit[1821]: CRED_DISP pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:02.150283 systemd-logind[1623]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:41:02.202592 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:58398.service - OpenSSH per-connection server daemon (10.0.0.1:58398). Jan 20 01:41:02.216555 systemd-logind[1623]: Removed session 8. Jan 20 01:41:02.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.44:22-10.0.0.1:58382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:02.251437 kernel: audit: type=1131 audit(1768873262.109:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.44:22-10.0.0.1:58382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:02.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.44:22-10.0.0.1:58398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:41:02.969000 audit[1858]: USER_ACCT pid=1858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:02.993106 sshd[1858]: Accepted publickey for core from 10.0.0.1 port 58398 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:41:03.000000 audit[1858]: CRED_ACQ pid=1858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:03.000000 audit[1858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe351d28b0 a2=3 a3=0 items=0 ppid=1 pid=1858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:03.000000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:41:03.009133 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:03.363612 systemd-logind[1623]: New session 9 of user core. Jan 20 01:41:03.375384 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:41:03.412000 audit[1858]: USER_START pid=1858 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:03.430000 audit[1861]: CRED_ACQ pid=1861 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:41:03.507246 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:41:03.502000 audit[1862]: USER_ACCT pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:03.511000 audit[1862]: CRED_REFR pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:03.514003 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:03.547000 audit[1862]: USER_START pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:41:09.307527 update_engine[1625]: I20260120 01:41:09.217936 1625 update_attempter.cc:509] Updating boot flags... Jan 20 01:41:09.578648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:41:09.621117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:14.601775 systemd[1]: Starting docker.service - Docker Application Container Engine... 
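
The audit records above (SYSCALL/PROCTITLE pairs) encode the triggering command line as hex in the proctitle= field, with NUL bytes separating argv entries. Decoding the auditctl record from event :214, assuming xxd is available:

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' ' && echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules
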
Jan 20 01:41:14.705642 (dockerd)[1901]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:41:15.255261 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 20 01:41:15.255428 kernel: audit: type=1130 audit(1768873275.195:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:15.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:15.200485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:15.300619 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:15.886448 kubelet[1906]: E0120 01:41:15.886018 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:15.924251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:15.924565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:15.931596 systemd[1]: kubelet.service: Consumed 1.316s CPU time, 110.3M memory peak. Jan 20 01:41:15.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:41:15.995940 kernel: audit: type=1131 audit(1768873275.930:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:41:24.110374 dockerd[1901]: time="2026-01-20T01:41:24.103142401Z" level=info msg="Starting up" Jan 20 01:41:24.388666 dockerd[1901]: time="2026-01-20T01:41:24.185100958Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:41:27.613024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:41:28.877191 dockerd[1901]: time="2026-01-20T01:41:28.868632460Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:41:29.596607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:40.697992 dockerd[1901]: time="2026-01-20T01:41:40.695072443Z" level=info msg="Loading containers: start." 
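
The NETFILTER_CFG burst that follows is dockerd creating its standard chains while "Loading containers: start." runs. Decoding each record's proctitle= the same way as above yields, for the IPv4 (family=2) events in order:

    iptables --wait -t nat    -N DOCKER
    iptables --wait -t filter -N DOCKER
    iptables --wait -t filter -N DOCKER-FORWARD
    iptables --wait -t filter -N DOCKER-BRIDGE
    iptables --wait -t filter -N DOCKER-CT
    iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
    iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
    # ...then -A PREROUTING / -A OUTPUT nat jumps into DOCKER, and the same
    # chains again via ip6tables (the family=10 records further down)
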
Jan 20 01:41:41.412261 kernel: Initializing XFRM netlink socket Jan 20 01:41:45.725000 audit[1972]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:45.887393 kernel: audit: type=1325 audit(1768873305.725:233): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:45.946039 kernel: audit: type=1300 audit(1768873305.725:233): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe78caad40 a2=0 a3=0 items=0 ppid=1901 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:45.725000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe78caad40 a2=0 a3=0 items=0 ppid=1901 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:45.995747 kernel: audit: type=1327 audit(1768873305.725:233): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:41:45.725000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:41:46.051000 audit[1974]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.119471 kernel: audit: type=1325 audit(1768873306.051:234): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.154908 kernel: audit: type=1300 audit(1768873306.051:234): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbd78c0f0 a2=0 a3=0 items=0 ppid=1901 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.051000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbd78c0f0 a2=0 a3=0 items=0 ppid=1901 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:41:46.346049 kernel: audit: type=1327 audit(1768873306.051:234): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:41:46.382654 kernel: audit: type=1325 audit(1768873306.321:235): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.321000 audit[1976]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.526256 kernel: audit: type=1300 audit(1768873306.321:235): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc231faaa0 a2=0 a3=0 items=0 ppid=1901 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:41:46.321000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc231faaa0 a2=0 a3=0 items=0 ppid=1901 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:41:46.729910 kernel: audit: type=1327 audit(1768873306.321:235): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:41:46.730129 kernel: audit: type=1325 audit(1768873306.551:236): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.551000 audit[1978]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.551000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffde0b7240 a2=0 a3=0 items=0 ppid=1901 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:41:46.747000 audit[1980]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.747000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb069c7d0 a2=0 a3=0 items=0 ppid=1901 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:41:46.877000 audit[1982]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:46.877000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0e5ffcf0 a2=0 a3=0 items=0 ppid=1901 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:46.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:41:47.090000 audit[1984]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:47.090000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcdbde0ad0 a2=0 a3=0 items=0 ppid=1901 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:47.090000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 
01:41:48.315799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:48.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:41:48.427554 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:47.150000 audit[1986]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:47.150000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff56a4e330 a2=0 a3=0 items=0 ppid=1901 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:47.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:41:49.445000 audit[1997]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:49.445000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd12744700 a2=0 a3=0 items=0 ppid=1901 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:49.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 01:41:49.900000 audit[2004]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:49.900000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff1747e280 a2=0 a3=0 items=0 ppid=1901 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:49.900000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:41:49.969000 audit[2006]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:49.969000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff215262d0 a2=0 a3=0 items=0 ppid=1901 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:49.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:41:50.017000 audit[2008]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:50.017000 audit[2008]: SYSCALL arch=c000003e syscall=46 
success=yes exit=248 a0=3 a1=7ffd31e7a600 a2=0 a3=0 items=0 ppid=1901 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:50.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:41:50.045000 audit[2010]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:50.045000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff62aa7f10 a2=0 a3=0 items=0 ppid=1901 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:50.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:41:51.633135 kubelet[1995]: E0120 01:41:51.619404 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:51.706937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:51.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:41:51.707455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:51.713217 systemd[1]: kubelet.service: Consumed 2.405s CPU time, 110.6M memory peak. Jan 20 01:41:51.791416 kernel: kauditd_printk_skb: 30 callbacks suppressed Jan 20 01:41:51.804569 kernel: audit: type=1131 audit(1768873311.711:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:41:52.954000 audit[2042]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:52.954000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff5fd1c810 a2=0 a3=0 items=0 ppid=1901 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.198114 kernel: audit: type=1325 audit(1768873312.954:248): table=nat:15 family=10 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.198338 kernel: audit: type=1300 audit(1768873312.954:248): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff5fd1c810 a2=0 a3=0 items=0 ppid=1901 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:52.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:41:53.201988 kernel: audit: type=1327 audit(1768873312.954:248): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:41:53.252757 kernel: audit: type=1325 audit(1768873313.080:249): table=filter:16 family=10 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.080000 audit[2044]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.440287 kernel: audit: type=1300 audit(1768873313.080:249): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcab174a60 a2=0 a3=0 items=0 ppid=1901 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.622850 kernel: audit: type=1327 audit(1768873313.080:249): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:41:53.623192 kernel: audit: type=1325 audit(1768873313.153:250): table=filter:17 family=10 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.623322 kernel: audit: type=1300 audit(1768873313.153:250): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5bc4ecb0 a2=0 a3=0 items=0 ppid=1901 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.080000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcab174a60 a2=0 a3=0 items=0 ppid=1901 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:41:53.153000 audit[2046]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.153000 
audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5bc4ecb0 a2=0 a3=0 items=0 ppid=1901 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:41:53.810209 kernel: audit: type=1327 audit(1768873313.153:250): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:41:53.183000 audit[2048]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.183000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff01f3eb30 a2=0 a3=0 items=0 ppid=1901 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:41:53.221000 audit[2050]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.221000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7af67f90 a2=0 a3=0 items=0 ppid=1901 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:41:53.232000 audit[2052]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.232000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2f6ec670 a2=0 a3=0 items=0 ppid=1901 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:41:53.256000 audit[2054]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.256000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc555fa470 a2=0 a3=0 items=0 ppid=1901 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:41:53.724000 audit[2056]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 
01:41:53.724000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe6c0266c0 a2=0 a3=0 items=0 ppid=1901 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:41:53.911000 audit[2058]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.911000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcba714300 a2=0 a3=0 items=0 ppid=1901 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 01:41:53.959000 audit[2060]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:53.959000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffce3bf2100 a2=0 a3=0 items=0 ppid=1901 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:53.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:41:54.029000 audit[2062]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.029000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff437d89a0 a2=0 a3=0 items=0 ppid=1901 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:41:54.096000 audit[2064]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.096000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd455fe370 a2=0 a3=0 items=0 ppid=1901 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:41:54.135000 audit[2066]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.135000 
audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc6b5725d0 a2=0 a3=0 items=0 ppid=1901 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:41:54.365000 audit[2071]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:54.365000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda2d07880 a2=0 a3=0 items=0 ppid=1901 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:41:54.410000 audit[2073]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:54.410000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd9d6b5a60 a2=0 a3=0 items=0 ppid=1901 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.410000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:41:54.454000 audit[2075]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:54.454000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffecccb6dc0 a2=0 a3=0 items=0 ppid=1901 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:41:54.517000 audit[2077]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.517000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc32946200 a2=0 a3=0 items=0 ppid=1901 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.517000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:41:54.557000 audit[2079]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.557000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff2b486c30 a2=0 a3=0 items=0 ppid=1901 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.557000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:41:54.609000 audit[2081]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:41:54.609000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffece74a7c0 a2=0 a3=0 items=0 ppid=1901 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:41:54.928000 audit[2087]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:54.928000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffffe17b1f0 a2=0 a3=0 items=0 ppid=1901 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 01:41:54.999000 audit[2089]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:54.999000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5c177cb0 a2=0 a3=0 items=0 ppid=1901 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:54.999000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 01:41:55.220000 audit[2097]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.220000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd3c279b80 a2=0 a3=0 items=0 ppid=1901 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 01:41:55.410000 audit[2103]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.410000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeaba24890 a2=0 a3=0 items=0 ppid=1901 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.410000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 01:41:55.432000 audit[2105]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.432000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffee88d6f90 a2=0 a3=0 items=0 ppid=1901 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 01:41:55.442000 audit[2107]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.442000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6a061010 a2=0 a3=0 items=0 ppid=1901 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 01:41:55.463000 audit[2109]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.463000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcb739fc70 a2=0 a3=0 items=0 ppid=1901 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:41:55.515000 audit[2111]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:41:55.515000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeb1fd4ad0 a2=0 a3=0 items=0 ppid=1901 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:41:55.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 01:41:55.523367 systemd-networkd[1534]: docker0: Link UP Jan 20 01:41:55.643011 dockerd[1901]: time="2026-01-20T01:41:55.629475957Z" level=info msg="Loading containers: done." Jan 20 01:42:06.313077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:42:06.391415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
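[Annotation. The NETFILTER_CFG / SYSCALL / PROCTITLE triples above are dockerd programming its standard firewall scaffolding through iptables-nft: the DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains in the filter and nat tables, once for IPv4 (family=2, comm="iptables") and once for IPv6 (family=10, comm="ip6tables"), finishing with the POSTROUTING MASQUERADE rule for 172.17.0.0/16 and the per-bridge accept/isolation rules for docker0, after which systemd-networkd reports the link up. arch=c000003e is x86_64, where syscall 46 is sendmsg(2) -- the netlink write that carries each nftables transaction -- and exit= is the byte count it returned; auid=4294967295 is (uint32)-1, i.e. no login UID. The proctitle= values are each helper's argv, hex-encoded with NUL separators; a minimal Python sketch to recover them (the helper name is illustrative):

    def decode_proctitle(hex_str: str) -> list[str]:
        # An audit PROCTITLE value is the raw command line: hex-encoded
        # bytes, with NUL separating the individual arguments.
        return bytes.fromhex(hex_str).decode("utf-8", errors="replace").split("\x00")

    # The first PROCTITLE record in this block decodes to the iptables call:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B4552"
        "2D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31"))
    # ['/usr/bin/iptables', '--wait', '-I', 'DOCKER-FORWARD', '-j', 'DOCKER-ISOLATION-STAGE-1']

The type= numbers in the kauditd echo lines map to record names per linux/audit.h: 1130 SERVICE_START, 1131 SERVICE_STOP, 1300 SYSCALL, 1325 NETFILTER_CFG, 1327 PROCTITLE.]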
Jan 20 01:42:06.488495 dockerd[1901]: time="2026-01-20T01:42:06.486342586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:42:06.488495 dockerd[1901]: time="2026-01-20T01:42:06.487131545Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:42:06.488495 dockerd[1901]: time="2026-01-20T01:42:06.487456245Z" level=info msg="Initializing buildkit" Jan 20 01:42:07.096377 dockerd[1901]: time="2026-01-20T01:42:07.083580052Z" level=info msg="Completed buildkit initialization" Jan 20 01:42:07.178152 dockerd[1901]: time="2026-01-20T01:42:07.178085539Z" level=info msg="Daemon has completed initialization" Jan 20 01:42:07.181576 dockerd[1901]: time="2026-01-20T01:42:07.178920454Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:42:07.199123 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:42:07.246548 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 20 01:42:07.246995 kernel: audit: type=1130 audit(1768873327.199:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:07.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:09.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:09.786809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:09.839472 kernel: audit: type=1130 audit(1768873329.786:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:09.878742 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:10.928254 kubelet[2158]: E0120 01:42:10.924468 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:10.945172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:10.945491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:10.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:42:10.950423 systemd[1]: kubelet.service: Consumed 1.056s CPU time, 110.1M memory peak. Jan 20 01:42:10.977101 kernel: audit: type=1131 audit(1768873330.949:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:42:21.169861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:42:21.308260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:22.985496 containerd[1643]: time="2026-01-20T01:42:22.950758981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 01:42:26.293652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:26.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:26.391003 kernel: audit: type=1130 audit(1768873346.292:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:26.455754 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:31.601774 kubelet[2179]: E0120 01:42:31.596494 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:31.640630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:31.641037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:31.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:42:31.734901 systemd[1]: kubelet.service: Consumed 2.283s CPU time, 112.5M memory peak. Jan 20 01:42:31.782888 kernel: audit: type=1131 audit(1768873351.721:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:42:32.879956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507671137.mount: Deactivated successfully. Jan 20 01:42:42.508389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 01:42:42.626411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:56.636086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:56.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:42:56.724655 kernel: audit: type=1130 audit(1768873376.642:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:42:56.797972 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:43:00.457648 kubelet[2253]: E0120 01:43:00.456078 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:43:00.735615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:43:00.738933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:43:00.760061 systemd[1]: kubelet.service: Consumed 3.730s CPU time, 112.5M memory peak. Jan 20 01:43:00.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:00.858257 kernel: audit: type=1131 audit(1768873380.757:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:11.602520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 01:43:11.648817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:43:19.078361 containerd[1643]: time="2026-01-20T01:43:19.072651581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:19.107550 containerd[1643]: time="2026-01-20T01:43:19.107380639Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30106330" Jan 20 01:43:19.147622 containerd[1643]: time="2026-01-20T01:43:19.147146327Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:19.200501 containerd[1643]: time="2026-01-20T01:43:19.199478453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:19.214588 containerd[1643]: time="2026-01-20T01:43:19.214314636Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 56.263165657s" Jan 20 01:43:19.214588 containerd[1643]: time="2026-01-20T01:43:19.214540052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 01:43:19.251361 containerd[1643]: time="2026-01-20T01:43:19.250830989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 01:43:20.158166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
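[Annotation. The kube-apiserver pull just above took nearly a minute for roughly 30 MB, which sets the tone for the rest of this section. The effective rate, computed straight from the figures in the record:

    # kube-apiserver:v1.33.7: "bytes read=30106330" over "56.263165657s"
    bytes_read = 30_106_330
    duration_s = 56.263165657
    print(f"{bytes_read / duration_s / 2**20:.2f} MiB/s")  # ~0.51 MiB/s

Registry throughput of about half a MiB/s suggests a constrained network or a heavily loaded host, consistent with the multi-second service start/stop latencies elsewhere in this log.]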
Jan 20 01:43:20.339448 kernel: audit: type=1130 audit(1768873400.214:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:43:20.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:43:20.539464 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:43:25.080888 kubelet[2270]: E0120 01:43:25.076649 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:43:25.098262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:43:25.098648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:43:25.103149 systemd[1]: kubelet.service: Consumed 2.250s CPU time, 112.5M memory peak. Jan 20 01:43:25.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:25.135428 kernel: audit: type=1131 audit(1768873405.101:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:35.837620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 01:43:36.118003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:43:41.376093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:43:41.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:43:41.561370 kernel: audit: type=1130 audit(1768873421.374:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:43:41.633577 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:43:47.801645 kubelet[2291]: E0120 01:43:47.800531 2291 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:43:47.852006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:43:47.852342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:43:47.893666 systemd[1]: kubelet.service: Consumed 2.581s CPU time, 110.5M memory peak. 
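[Annotation. kubelet.service is in a steady crash loop throughout this section: every start exits within seconds because /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm init or kubeadm join, so this pattern is the expected state of a node that has booted but not yet joined a cluster; the "Referenced but unset environment variable" warning about KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS is benign for the same reason (kubeadm writes kubeadm-flags.env for the former's counterpart KUBELET_KUBEADM_ARGS; KUBELET_EXTRA_ARGS is admin-supplied). The restart cadence can be read off the "Scheduled restart job" timestamps; a quick check over counters 3-7 seen so far, times taken verbatim from the entries above:

    from datetime import datetime

    # "Scheduled restart job" timestamps for restart counters 3..7:
    stamps = ["01:42:06", "01:42:21", "01:42:42", "01:43:11", "01:43:35"]
    t = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
    print([int((b - a).total_seconds()) for a, b in zip(t, t[1:])])
    # [15, 21, 29, 24]  -> a restart roughly every 15-30 s

]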
Jan 20 01:43:47.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:47.993879 kernel: audit: type=1131 audit(1768873427.892:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:43:58.259432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 01:43:58.415002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:44:01.835914 containerd[1643]: time="2026-01-20T01:44:01.834888072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:01.884474 containerd[1643]: time="2026-01-20T01:44:01.877550715Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 20 01:44:01.918187 containerd[1643]: time="2026-01-20T01:44:01.918106845Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:01.974499 containerd[1643]: time="2026-01-20T01:44:01.972032083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:01.974499 containerd[1643]: time="2026-01-20T01:44:01.973253660Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 42.722364162s" Jan 20 01:44:01.974499 containerd[1643]: time="2026-01-20T01:44:01.973353546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 01:44:02.002207 containerd[1643]: time="2026-01-20T01:44:02.001821952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 01:44:03.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:44:03.027130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:44:03.204004 kernel: audit: type=1130 audit(1768873443.025:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:44:03.247324 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:44:05.844620 kubelet[2308]: E0120 01:44:05.840090 2308 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:44:05.884031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:44:05.884449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:44:05.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:05.890598 systemd[1]: kubelet.service: Consumed 1.279s CPU time, 108.4M memory peak. Jan 20 01:44:05.951336 kernel: audit: type=1131 audit(1768873445.889:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:16.100288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:44:16.148135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:44:20.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:44:20.892157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:44:20.981789 kernel: audit: type=1130 audit(1768873460.891:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:44:21.005260 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:44:23.836936 kubelet[2329]: E0120 01:44:23.835884 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:44:23.864164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:44:23.868981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:44:23.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:23.916465 systemd[1]: kubelet.service: Consumed 1.499s CPU time, 109.1M memory peak. 
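[Annotation. A side note on the var-lib-containerd-tmpmounts-containerd\x2dmountNNNNN.mount units that keep appearing (e.g. at 01:42:32 above): these are containerd's short-lived temporary mounts being cleaned up, and the \x2d is systemd's unit-name escaping for a literal '-' inside a path component. A tiny decoder for reading such names:

    import re

    def unescape_unit_name(name: str) -> str:
        # systemd escapes bytes in unit names as \xNN; '-' itself becomes
        # \x2d because an unescaped '-' separates path components.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit_name(r"var-lib-containerd-tmpmounts-containerd\x2dmount3507671137.mount"))
    # var-lib-containerd-tmpmounts-containerd-mount3507671137.mount
    # i.e. the mount unit for /var/lib/containerd/tmpmounts/containerd-mount3507671137

]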
Jan 20 01:44:23.951098 containerd[1643]: time="2026-01-20T01:44:23.905161476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:23.951098 containerd[1643]: time="2026-01-20T01:44:23.932470649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20151328" Jan 20 01:44:23.951098 containerd[1643]: time="2026-01-20T01:44:23.945779257Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:23.965283 kernel: audit: type=1131 audit(1768873463.901:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:24.210734 containerd[1643]: time="2026-01-20T01:44:24.187476790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:24.219796 containerd[1643]: time="2026-01-20T01:44:24.206337382Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 22.204397104s" Jan 20 01:44:24.219796 containerd[1643]: time="2026-01-20T01:44:24.217650212Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 01:44:24.265322 containerd[1643]: time="2026-01-20T01:44:24.261020352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 01:44:34.119279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:44:34.191782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:44:38.979850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:44:38.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:44:38.992005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865957680.mount: Deactivated successfully. Jan 20 01:44:39.016857 kernel: audit: type=1130 audit(1768873478.981:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:44:39.032488 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:44:39.858844 kubelet[2352]: E0120 01:44:39.855006 2352 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:44:39.921881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:44:39.922327 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:44:39.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:39.929163 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 110.6M memory peak. Jan 20 01:44:40.031781 kernel: audit: type=1131 audit(1768873479.928:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:50.175178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:44:50.228848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:44:53.225339 containerd[1643]: time="2026-01-20T01:44:53.218868723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:53.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:44:53.230327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:44:53.400555 kernel: audit: type=1130 audit(1768873493.229:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:44:53.400978 containerd[1643]: time="2026-01-20T01:44:53.246984225Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 20 01:44:53.400978 containerd[1643]: time="2026-01-20T01:44:53.366416349Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:53.408013 containerd[1643]: time="2026-01-20T01:44:53.404074363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:44:53.408013 containerd[1643]: time="2026-01-20T01:44:53.405657914Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 29.144572264s" Jan 20 01:44:53.408013 containerd[1643]: time="2026-01-20T01:44:53.405849753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 01:44:53.459867 containerd[1643]: time="2026-01-20T01:44:53.457860952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 01:44:53.495426 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:44:55.366421 kubelet[2373]: E0120 01:44:55.353505 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:44:55.403248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:44:55.403826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:44:55.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:55.422878 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 110.6M memory peak. Jan 20 01:44:55.503150 kernel: audit: type=1131 audit(1768873495.421:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:44:57.682610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669565079.mount: Deactivated successfully. Jan 20 01:45:05.627037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 01:45:05.827629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:45:08.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:45:08.926051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:45:09.001206 kernel: audit: type=1130 audit(1768873508.929:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:45:09.023655 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:45:11.054890 kubelet[2442]: E0120 01:45:11.053671 2442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:45:11.087413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:45:11.087978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:45:11.100501 systemd[1]: kubelet.service: Consumed 1.047s CPU time, 110.2M memory peak. Jan 20 01:45:11.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:45:11.169064 kernel: audit: type=1131 audit(1768873511.099:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:45:12.675588 containerd[1643]: time="2026-01-20T01:45:12.672508710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:45:12.713824 containerd[1643]: time="2026-01-20T01:45:12.712188403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20814341" Jan 20 01:45:12.738192 containerd[1643]: time="2026-01-20T01:45:12.734439764Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:45:12.761645 containerd[1643]: time="2026-01-20T01:45:12.759406600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:45:12.780813 containerd[1643]: time="2026-01-20T01:45:12.776400063Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 19.315303702s" Jan 20 01:45:12.780813 containerd[1643]: time="2026-01-20T01:45:12.776472257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 01:45:12.807568 containerd[1643]: time="2026-01-20T01:45:12.806606207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:45:14.073134 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1355425013.mount: Deactivated successfully. Jan 20 01:45:14.120947 containerd[1643]: time="2026-01-20T01:45:14.117023728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:45:14.131494 containerd[1643]: time="2026-01-20T01:45:14.128167937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:45:14.131494 containerd[1643]: time="2026-01-20T01:45:14.131253634Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:45:14.137324 containerd[1643]: time="2026-01-20T01:45:14.136392035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:45:14.141109 containerd[1643]: time="2026-01-20T01:45:14.139358953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.332548574s" Jan 20 01:45:14.141109 containerd[1643]: time="2026-01-20T01:45:14.139446022Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 01:45:14.142521 containerd[1643]: time="2026-01-20T01:45:14.141578167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 01:45:20.103953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696059607.mount: Deactivated successfully. Jan 20 01:45:21.330399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:45:21.424160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:45:32.359954 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3756056623 wd_nsec: 3756055013 Jan 20 01:45:36.579986 systemd[1720]: Created slice background.slice - User Background Tasks Slice. Jan 20 01:45:36.622117 systemd[1720]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 20 01:45:37.342867 systemd[1720]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 20 01:45:37.828398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:45:37.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:45:38.070455 kernel: audit: type=1130 audit(1768873537.838:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:45:38.437901 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:45:39.498133 kubelet[2493]: E0120 01:45:39.497464 2493 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:45:39.537939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:45:39.538326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:45:39.553060 systemd[1]: kubelet.service: Consumed 2.231s CPU time, 110.1M memory peak. Jan 20 01:45:39.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:45:39.660458 kernel: audit: type=1131 audit(1768873539.552:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:45:49.581623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 01:45:49.606943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:45:56.836279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:45:56.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:45:57.075897 kernel: audit: type=1130 audit(1768873556.844:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:45:57.149893 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:46:01.610934 kubelet[2542]: E0120 01:46:01.610318 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:46:01.652580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:46:01.653104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:46:01.671534 systemd[1]: kubelet.service: Consumed 1.639s CPU time, 109M memory peak. Jan 20 01:46:01.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:46:01.750302 kernel: audit: type=1131 audit(1768873561.670:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:46:11.826239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 01:46:11.889533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:46:17.645777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:46:17.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:46:17.730124 kernel: audit: type=1130 audit(1768873577.644:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:46:17.754951 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:46:18.226979 containerd[1643]: time="2026-01-20T01:46:18.226839920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:46:18.237371 containerd[1643]: time="2026-01-20T01:46:18.236754988Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58916088" Jan 20 01:46:18.246596 containerd[1643]: time="2026-01-20T01:46:18.246533722Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:46:18.261214 containerd[1643]: time="2026-01-20T01:46:18.261116264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:46:18.265865 containerd[1643]: time="2026-01-20T01:46:18.265814064Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1m4.123983201s" Jan 20 01:46:18.479548 containerd[1643]: time="2026-01-20T01:46:18.475219914Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 01:46:18.696803 kubelet[2562]: E0120 01:46:18.695998 2562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:46:18.705618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:46:18.705979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:46:18.710845 systemd[1]: kubelet.service: Consumed 1.323s CPU time, 108.8M memory peak. Jan 20 01:46:18.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:46:18.723859 kernel: audit: type=1131 audit(1768873578.709:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:46:28.832175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 01:46:29.243261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:46:39.937507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:46:39.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:46:39.998533 kernel: audit: type=1130 audit(1768873599.934:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:46:40.024237 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:46:42.135471 kubelet[2600]: E0120 01:46:42.134919 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:46:42.348587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:46:42.351622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:46:42.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:46:42.360000 systemd[1]: kubelet.service: Consumed 2.001s CPU time, 109.1M memory peak. Jan 20 01:46:42.435986 kernel: audit: type=1131 audit(1768873602.359:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:46:55.223511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 20 01:46:56.558638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:47:19.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:19.038453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:47:19.105075 kernel: audit: type=1130 audit(1768873639.038:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:47:19.161766 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:47:21.124287 kubelet[2618]: E0120 01:47:21.121324 2618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:47:21.173209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:47:21.174156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:47:21.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:47:21.194492 systemd[1]: kubelet.service: Consumed 2.250s CPU time, 110.9M memory peak. Jan 20 01:47:21.269186 kernel: audit: type=1131 audit(1768873641.192:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:47:31.824604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Jan 20 01:47:32.062362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:47:40.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:47:40.322554 kernel: audit: type=1130 audit(1768873660.228:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:47:40.224045 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:47:40.224353 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:47:40.225420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:47:40.374642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:47:41.489993 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-9.scope)... Jan 20 01:47:41.492268 systemd[1]: Reloading... Jan 20 01:48:04.265083 zram_generator::config[2685]: No configuration found. Jan 20 01:48:09.572003 systemd[1]: Reloading finished in 28078 ms. 
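
A note on the loop above: every attempt exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (on a kubeadm-provisioned node that file only appears once `kubeadm init` or `kubeadm join` has run), and systemd reschedules the unit roughly ten seconds after each failure, walking the restart counter from 14 up to 18. A small sketch for pulling that pattern out of a saved journal; the regex, the file name, and the hard-coded year are illustrative assumptions, not part of the log:

```python
import re
from datetime import datetime

# Matches records such as:
#   Jan 20 01:45:49.581623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
RESTART = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2})\.\d+ systemd\[1\]: "
    r"kubelet\.service: Scheduled restart job, restart counter is at (?P<n>\d+)\."
)

def restart_events(lines):
    """Yield (timestamp, counter) for each scheduled kubelet restart."""
    for line in lines:
        m = RESTART.match(line)
        if m:
            # The journal omits the year; assume 2026 purely for parsing.
            ts = datetime.strptime("2026 " + m.group("ts"), "%Y %b %d %H:%M:%S")
            yield ts, int(m.group("n"))

events = list(restart_events(open("journal.txt")))  # hypothetical saved journal
for (t1, n1), (t2, n2) in zip(events, events[1:]):
    print(f"restart {n1} -> {n2}: {(t2 - t1).total_seconds():.0f}s apart")
```
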
Jan 20 01:48:09.817000 audit: BPF prog-id=61 op=LOAD Jan 20 01:48:09.849192 kernel: audit: type=1334 audit(1768873689.817:307): prog-id=61 op=LOAD Jan 20 01:48:09.849556 kernel: audit: type=1334 audit(1768873689.822:308): prog-id=57 op=UNLOAD Jan 20 01:48:09.822000 audit: BPF prog-id=57 op=UNLOAD Jan 20 01:48:09.849000 audit: BPF prog-id=62 op=LOAD Jan 20 01:48:09.849000 audit: BPF prog-id=45 op=UNLOAD Jan 20 01:48:09.935136 kernel: audit: type=1334 audit(1768873689.849:309): prog-id=62 op=LOAD Jan 20 01:48:09.938953 kernel: audit: type=1334 audit(1768873689.849:310): prog-id=45 op=UNLOAD Jan 20 01:48:09.939021 kernel: audit: type=1334 audit(1768873689.853:311): prog-id=63 op=LOAD Jan 20 01:48:09.853000 audit: BPF prog-id=63 op=LOAD Jan 20 01:48:09.853000 audit: BPF prog-id=64 op=LOAD Jan 20 01:48:09.960489 kernel: audit: type=1334 audit(1768873689.853:312): prog-id=64 op=LOAD Jan 20 01:48:09.853000 audit: BPF prog-id=46 op=UNLOAD Jan 20 01:48:10.014012 kernel: audit: type=1334 audit(1768873689.853:313): prog-id=46 op=UNLOAD Jan 20 01:48:10.014177 kernel: audit: type=1334 audit(1768873689.853:314): prog-id=47 op=UNLOAD Jan 20 01:48:09.853000 audit: BPF prog-id=47 op=UNLOAD Jan 20 01:48:09.861000 audit: BPF prog-id=65 op=LOAD Jan 20 01:48:10.042765 kernel: audit: type=1334 audit(1768873689.861:315): prog-id=65 op=LOAD Jan 20 01:48:09.862000 audit: BPF prog-id=56 op=UNLOAD Jan 20 01:48:10.089603 kernel: audit: type=1334 audit(1768873689.862:316): prog-id=56 op=UNLOAD Jan 20 01:48:09.875000 audit: BPF prog-id=66 op=LOAD Jan 20 01:48:09.877000 audit: BPF prog-id=67 op=LOAD Jan 20 01:48:09.877000 audit: BPF prog-id=54 op=UNLOAD Jan 20 01:48:09.877000 audit: BPF prog-id=55 op=UNLOAD Jan 20 01:48:09.902000 audit: BPF prog-id=68 op=LOAD Jan 20 01:48:09.902000 audit: BPF prog-id=51 op=UNLOAD Jan 20 01:48:09.902000 audit: BPF prog-id=69 op=LOAD Jan 20 01:48:09.902000 audit: BPF prog-id=70 op=LOAD Jan 20 01:48:09.902000 audit: BPF prog-id=52 op=UNLOAD Jan 20 01:48:09.902000 audit: BPF prog-id=53 op=UNLOAD Jan 20 01:48:09.919000 audit: BPF prog-id=71 op=LOAD Jan 20 01:48:09.919000 audit: BPF prog-id=58 op=UNLOAD Jan 20 01:48:09.926000 audit: BPF prog-id=72 op=LOAD Jan 20 01:48:09.926000 audit: BPF prog-id=73 op=LOAD Jan 20 01:48:09.926000 audit: BPF prog-id=59 op=UNLOAD Jan 20 01:48:09.926000 audit: BPF prog-id=60 op=UNLOAD Jan 20 01:48:09.952000 audit: BPF prog-id=74 op=LOAD Jan 20 01:48:09.952000 audit: BPF prog-id=48 op=UNLOAD Jan 20 01:48:09.953000 audit: BPF prog-id=75 op=LOAD Jan 20 01:48:09.953000 audit: BPF prog-id=76 op=LOAD Jan 20 01:48:09.953000 audit: BPF prog-id=49 op=UNLOAD Jan 20 01:48:09.953000 audit: BPF prog-id=50 op=UNLOAD Jan 20 01:48:09.982000 audit: BPF prog-id=77 op=LOAD Jan 20 01:48:09.982000 audit: BPF prog-id=42 op=UNLOAD Jan 20 01:48:09.982000 audit: BPF prog-id=78 op=LOAD Jan 20 01:48:09.982000 audit: BPF prog-id=79 op=LOAD Jan 20 01:48:09.982000 audit: BPF prog-id=43 op=UNLOAD Jan 20 01:48:09.982000 audit: BPF prog-id=44 op=UNLOAD Jan 20 01:48:09.991000 audit: BPF prog-id=80 op=LOAD Jan 20 01:48:09.991000 audit: BPF prog-id=41 op=UNLOAD Jan 20 01:48:10.267074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:48:10.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:10.340396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
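
The burst of type=1334 records here is expected during a daemon-reload: systemd attaches BPF programs to each unit's cgroup (device-access filters, IP accounting and firewalling), and on reload it installs a fresh program (op=LOAD) before detaching the one it replaces (op=UNLOAD), hence the paired prog-ids. A hypothetical helper to tally that churn from a saved journal; it matches only the userspace `audit: BPF ...` records, so the kernel's type=1334 echoes of the same events are not double-counted:

```python
import re

BPF = re.compile(r"audit: BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)")

def bpf_churn(lines):
    """Return (loaded_ids, unloaded_ids) seen across the given journal lines."""
    loaded, unloaded = [], []
    for line in lines:
        for m in BPF.finditer(line):
            (loaded if m.group("op") == "LOAD" else unloaded).append(int(m.group("id")))
    return loaded, unloaded

line = "Jan 20 01:48:09.817000 audit: BPF prog-id=61 op=LOAD Jan 20 01:48:09.822000 audit: BPF prog-id=57 op=UNLOAD"
new, old = bpf_churn([line])
print(f"{len(new)} loaded (ids {new}), {len(old)} replaced (ids {old})")
```
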
Jan 20 01:48:10.360824 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:48:10.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:10.364958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:48:10.365073 systemd[1]: kubelet.service: Consumed 764ms CPU time, 98.4M memory peak. Jan 20 01:48:10.390472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:48:13.158275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:48:13.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.243061 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:48:14.121776 kubelet[2734]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:48:14.121776 kubelet[2734]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:48:14.121776 kubelet[2734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:48:14.146527 kubelet[2734]: I0120 01:48:14.130212 2734 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:48:14.913126 kubelet[2734]: I0120 01:48:14.911166 2734 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:48:14.914654 kubelet[2734]: I0120 01:48:14.913735 2734 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:48:14.914654 kubelet[2734]: I0120 01:48:14.914186 2734 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:48:15.520196 kubelet[2734]: E0120 01:48:15.519865 2734 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:48:15.577652 kubelet[2734]: I0120 01:48:15.563331 2734 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:48:15.940084 kubelet[2734]: I0120 01:48:15.937043 2734 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:48:16.051422 kubelet[2734]: I0120 01:48:16.049178 2734 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:48:16.065727 kubelet[2734]: I0120 01:48:16.061670 2734 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:48:16.065727 kubelet[2734]: I0120 01:48:16.061838 2734 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:48:16.068405 kubelet[2734]: I0120 01:48:16.068068 2734 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:48:16.068405 kubelet[2734]: I0120 01:48:16.068182 2734 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:48:16.075089 kubelet[2734]: I0120 01:48:16.070374 2734 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:48:16.087087 kubelet[2734]: I0120 01:48:16.084975 2734 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:48:16.087087 kubelet[2734]: I0120 01:48:16.085100 2734 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:48:16.087087 kubelet[2734]: I0120 01:48:16.085299 2734 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:48:16.087087 kubelet[2734]: I0120 01:48:16.085376 2734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:48:16.142989 kubelet[2734]: E0120 01:48:16.140760 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:48:16.142989 kubelet[2734]: I0120 01:48:16.141069 2734 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:48:16.169576 kubelet[2734]: I0120 01:48:16.166859 2734 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 
01:48:16.183507 kubelet[2734]: W0120 01:48:16.173430 2734 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:48:16.183507 kubelet[2734]: E0120 01:48:16.179101 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:48:16.280659 kubelet[2734]: I0120 01:48:16.264079 2734 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:48:16.280659 kubelet[2734]: I0120 01:48:16.264310 2734 server.go:1289] "Started kubelet" Jan 20 01:48:16.280659 kubelet[2734]: I0120 01:48:16.274368 2734 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:48:16.314034 kubelet[2734]: I0120 01:48:16.311414 2734 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:48:16.314034 kubelet[2734]: I0120 01:48:16.313296 2734 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:48:16.666530 kubelet[2734]: I0120 01:48:16.665268 2734 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:48:16.682056 kubelet[2734]: I0120 01:48:16.682018 2734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:48:16.859397 kubelet[2734]: E0120 01:48:16.662166 2734 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4d413e5b6ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,LastTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:48:16.866304 kubelet[2734]: I0120 01:48:16.866159 2734 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:48:16.866471 kubelet[2734]: I0120 01:48:16.866399 2734 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:48:16.873765 kubelet[2734]: I0120 01:48:16.869454 2734 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:48:16.873765 kubelet[2734]: E0120 01:48:16.870304 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:16.884834 kubelet[2734]: E0120 01:48:16.878126 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Jan 20 01:48:16.886081 kubelet[2734]: I0120 
01:48:16.885934 2734 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:48:16.893052 kubelet[2734]: I0120 01:48:16.886596 2734 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:48:16.926086 kubelet[2734]: I0120 01:48:16.925897 2734 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:48:16.932759 kubelet[2734]: I0120 01:48:16.927967 2734 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:48:16.932759 kubelet[2734]: E0120 01:48:16.928379 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:48:16.949174 kubelet[2734]: E0120 01:48:16.949112 2734 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:48:16.981781 kubelet[2734]: E0120 01:48:16.978885 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:17.079948 kubelet[2734]: E0120 01:48:17.079888 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Jan 20 01:48:17.080446 kubelet[2734]: E0120 01:48:17.080221 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:17.146110 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 20 01:48:17.146274 kernel: audit: type=1325 audit(1768873697.122:350): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.122000 audit[2757]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.122000 audit[2757]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe039dd8a0 a2=0 a3=0 items=0 ppid=2734 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.184916 kubelet[2734]: E0120 01:48:17.184370 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:17.212984 kernel: audit: type=1300 audit(1768873697.122:350): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe039dd8a0 a2=0 a3=0 items=0 ppid=2734 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:48:17.252791 kubelet[2734]: I0120 01:48:17.251111 2734 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:48:17.252791 kubelet[2734]: I0120 01:48:17.251150 2734 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:48:17.252791 kubelet[2734]: I0120 01:48:17.251180 2734 state_mem.go:36] 
"Initialized new in-memory state store" Jan 20 01:48:17.272973 kernel: audit: type=1327 audit(1768873697.122:350): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:48:17.273163 kernel: audit: type=1325 audit(1768873697.157:351): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.157000 audit[2758]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.157000 audit[2758]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0ad584a0 a2=0 a3=0 items=0 ppid=2734 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.623915 kernel: audit: type=1300 audit(1768873697.157:351): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0ad584a0 a2=0 a3=0 items=0 ppid=2734 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.783770 kernel: audit: type=1327 audit(1768873697.157:351): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:48:17.840519 kernel: audit: type=1325 audit(1768873697.215:352): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.840839 kernel: audit: type=1300 audit(1768873697.215:352): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1b5d42c0 a2=0 a3=0 items=0 ppid=2734 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.840986 kernel: audit: type=1327 audit(1768873697.215:352): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:48:17.841024 kernel: audit: type=1325 audit(1768873697.266:353): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:48:17.215000 audit[2761]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.215000 audit[2761]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1b5d42c0 a2=0 a3=0 items=0 ppid=2734 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:48:17.266000 audit[2764]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:17.266000 audit[2764]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 
a1=7ffd7a09b640 a2=0 a3=0 items=0 ppid=2734 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:17.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:48:18.030984 kubelet[2734]: E0120 01:48:17.940222 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:18.045351 kubelet[2734]: I0120 01:48:18.040102 2734 policy_none.go:49] "None policy: Start" Jan 20 01:48:18.045351 kubelet[2734]: I0120 01:48:18.040595 2734 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:48:18.051102 kubelet[2734]: I0120 01:48:18.048594 2734 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:48:18.051102 kubelet[2734]: E0120 01:48:18.049038 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:18.051102 kubelet[2734]: E0120 01:48:18.049313 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:48:18.051102 kubelet[2734]: E0120 01:48:18.049414 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:48:18.059416 kubelet[2734]: E0120 01:48:18.057796 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Jan 20 01:48:18.062949 kubelet[2734]: E0120 01:48:18.060455 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:48:18.063378 kubelet[2734]: E0120 01:48:18.063244 2734 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:48:18.153280 kubelet[2734]: E0120 01:48:18.149190 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:18.151000 audit[2767]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:18.151000 audit[2767]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc859a6b10 a2=0 a3=0 items=0 ppid=2734 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 01:48:18.167921 kubelet[2734]: I0120 01:48:18.155161 2734 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 01:48:18.169000 audit[2769]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:18.169000 audit[2769]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3c638fe0 a2=0 a3=0 items=0 ppid=2734 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:48:18.170000 audit[2768]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:48:18.170000 audit[2768]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff0f78ad90 a2=0 a3=0 items=0 ppid=2734 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:48:18.195614 kubelet[2734]: I0120 01:48:18.180174 2734 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:48:18.195614 kubelet[2734]: I0120 01:48:18.180327 2734 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:48:18.195614 kubelet[2734]: I0120 01:48:18.180400 2734 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
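
Each NETFILTER_CFG record above is followed by a PROCTITLE field: the invoked command line, hex-encoded with NUL separators between arguments. Decoding it shows exactly which iptables/ip6tables commands the kubelet ran while registering its KUBE-FIREWALL and canary chains. The helper below is an illustrative sketch; the hex string is copied verbatim from the KUBE-FIREWALL record above:

```python
import shlex

def decode_proctitle(hexstr: str) -> list[str]:
    """Recover argv from an audit PROCTITLE hex dump (NUL-separated)."""
    return bytes.fromhex(hexstr).decode().split("\x00")

argv = decode_proctitle(
    "69707461626C6573"                    # iptables
    "002D770035"                          # -w 5
    "002D5700313030303030"                # -W 100000
    "002D41004B5542452D4649524557414C4C"  # -A KUBE-FIREWALL
    "002D740066696C746572"                # -t filter
    "002D6D00636F6D6D656E74"              # -m comment
    "002D2D636F6D6D656E7400"              # --comment
    "626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73"
    "002D2D64737400"                      # --dst
    "3132372E302E302E302F38"              # 127.0.0.1/8
)
print(" ".join(shlex.quote(a) for a in argv))
# iptables -w 5 -W 100000 -A KUBE-FIREWALL -t filter -m comment
#   --comment 'block incoming localnet connections' --dst 127.0.0.1/8
```
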
Jan 20 01:48:18.195614 kubelet[2734]: I0120 01:48:18.180415 2734 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:48:18.195614 kubelet[2734]: E0120 01:48:18.180653 2734 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:48:18.195614 kubelet[2734]: E0120 01:48:18.182548 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:48:18.206000 audit[2771]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:18.206000 audit[2771]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2c0c0990 a2=0 a3=0 items=0 ppid=2734 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:48:18.210251 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:48:18.224000 audit[2772]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:48:18.224000 audit[2772]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0ed30260 a2=0 a3=0 items=0 ppid=2734 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:48:18.253043 kubelet[2734]: E0120 01:48:18.250234 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:18.248000 audit[2773]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:48:18.248000 audit[2773]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef76c9560 a2=0 a3=0 items=0 ppid=2734 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:48:18.273000 audit[2774]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:48:18.273000 audit[2774]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2ebd3fb0 a2=0 a3=0 items=0 ppid=2734 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.273000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:48:18.281493 kubelet[2734]: E0120 01:48:18.281395 2734 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:48:18.304599 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:48:18.322000 audit[2775]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:48:18.322000 audit[2775]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8420ede0 a2=0 a3=0 items=0 ppid=2734 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:18.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:48:18.339180 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:48:18.351930 kubelet[2734]: E0120 01:48:18.351539 2734 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:48:18.370664 kubelet[2734]: E0120 01:48:18.364812 2734 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:48:18.370664 kubelet[2734]: I0120 01:48:18.365380 2734 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:48:18.370664 kubelet[2734]: I0120 01:48:18.365470 2734 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:48:18.373243 kubelet[2734]: I0120 01:48:18.372608 2734 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:48:18.385328 kubelet[2734]: E0120 01:48:18.378828 2734 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:48:18.385328 kubelet[2734]: E0120 01:48:18.379816 2734 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:48:18.778875 kubelet[2734]: E0120 01:48:18.770439 2734 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4d413e5b6ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,LastTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:48:18.811105 kubelet[2734]: I0120 01:48:18.773048 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:48:18.811105 kubelet[2734]: I0120 01:48:18.780372 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:48:18.811105 kubelet[2734]: I0120 01:48:18.780504 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:48:18.821115 kubelet[2734]: I0120 01:48:18.819455 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:18.865932 kubelet[2734]: E0120 01:48:18.850808 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jan 20 01:48:18.986103 kubelet[2734]: E0120 01:48:18.978543 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" Jan 20 01:48:19.123450 kubelet[2734]: I0120 01:48:19.120252 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:19.123450 kubelet[2734]: E0120 01:48:19.120827 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jan 20 01:48:19.123450 kubelet[2734]: I0120 01:48:19.122086 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:48:19.123450 kubelet[2734]: I0120 01:48:19.122132 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:48:19.123450 kubelet[2734]: I0120 01:48:19.122157 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:48:19.123450 kubelet[2734]: I0120 01:48:19.122177 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:48:19.125498 kubelet[2734]: I0120 01:48:19.122197 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:48:19.174201 systemd[1]: Created slice kubepods-burstable-poda23d1c402d2fa864b27c69acb395c2a8.slice - libcontainer container kubepods-burstable-poda23d1c402d2fa864b27c69acb395c2a8.slice. Jan 20 01:48:19.230653 kubelet[2734]: I0120 01:48:19.230456 2734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:48:19.274225 kubelet[2734]: E0120 01:48:19.271284 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:19.285315 kubelet[2734]: E0120 01:48:19.277362 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:19.283137 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
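
Two retry regimes are visible at this point. Node registration ("Attempting to register node") keeps failing outright while 10.0.0.44:6443 refuses connections, and the lease controller's "will retry" interval doubles on each consecutive failure: 200ms, 400ms, 800ms, 1.6s (and 3.2s further down). A toy model of that doubling; the base and factor are read off the log, while the cap and attempt count are assumptions for illustration, not values taken from the kubelet source:

```python
def backoff_intervals(base=0.2, factor=2.0, cap=7.0, attempts=6):
    """Yield exponentially growing retry delays, clamped at `cap` seconds."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print([f"{d:g}s" for d in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```
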
Jan 20 01:48:19.311453 containerd[1643]: time="2026-01-20T01:48:19.307017713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a23d1c402d2fa864b27c69acb395c2a8,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:19.358181 kubelet[2734]: E0120 01:48:19.355512 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:19.361431 kubelet[2734]: E0120 01:48:19.359316 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:19.361541 containerd[1643]: time="2026-01-20T01:48:19.360005851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:19.434615 kubelet[2734]: E0120 01:48:19.434215 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:48:19.447815 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 20 01:48:19.481051 kubelet[2734]: E0120 01:48:19.481009 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:19.481836 kubelet[2734]: E0120 01:48:19.481811 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:19.486261 containerd[1643]: time="2026-01-20T01:48:19.486027536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:19.558066 kubelet[2734]: I0120 01:48:19.556942 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:19.562295 kubelet[2734]: E0120 01:48:19.559869 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jan 20 01:48:19.755607 containerd[1643]: time="2026-01-20T01:48:19.755304841Z" level=info msg="connecting to shim 031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a" address="unix:///run/containerd/s/adbcef3ee3c5a6ebf35e7d50672629f7bc553c77f0fef6dfe0dc49de4b8e5235" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:48:19.842622 containerd[1643]: time="2026-01-20T01:48:19.842359589Z" level=info msg="connecting to shim c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3" address="unix:///run/containerd/s/4aea575429a2cd7e5f9f767a35b5e4a00b86ff3e051b43713b2194618990aec5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:48:19.848343 containerd[1643]: time="2026-01-20T01:48:19.847942922Z" level=info msg="connecting to shim 27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12" address="unix:///run/containerd/s/1b4b994f60f5f41dac22e31393bf5ef29764d0520f00532969a72a584aec2f98" namespace=k8s.io 
protocol=ttrpc version=3 Jan 20 01:48:20.185016 systemd[1]: Started cri-containerd-031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a.scope - libcontainer container 031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a. Jan 20 01:48:20.201484 systemd[1]: Started cri-containerd-27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12.scope - libcontainer container 27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12. Jan 20 01:48:20.274626 systemd[1]: Started cri-containerd-c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3.scope - libcontainer container c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3. Jan 20 01:48:20.390224 kubelet[2734]: I0120 01:48:20.390181 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:20.451000 audit: BPF prog-id=81 op=LOAD Jan 20 01:48:20.458079 kubelet[2734]: E0120 01:48:20.443028 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jan 20 01:48:20.478000 audit: BPF prog-id=82 op=LOAD Jan 20 01:48:20.478000 audit[2829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.478000 audit: BPF prog-id=82 op=UNLOAD Jan 20 01:48:20.478000 audit[2829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.479000 audit: BPF prog-id=83 op=LOAD Jan 20 01:48:20.479000 audit[2829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.479000 audit: BPF prog-id=84 op=LOAD Jan 20 01:48:20.479000 audit[2829]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 01:48:20.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.479000 audit: BPF prog-id=84 op=UNLOAD Jan 20 01:48:20.479000 audit[2829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.479000 audit: BPF prog-id=83 op=UNLOAD Jan 20 01:48:20.479000 audit[2829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.480000 audit: BPF prog-id=85 op=LOAD Jan 20 01:48:20.480000 audit[2829]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2787 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313238396262333263353534306361643532666637306238353736 Jan 20 01:48:20.555413 kubelet[2734]: E0120 01:48:20.534408 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:48:20.601189 kubelet[2734]: E0120 01:48:20.600632 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="3.2s" Jan 20 01:48:20.602000 audit: BPF prog-id=86 op=LOAD Jan 20 01:48:20.617000 audit: BPF prog-id=87 op=LOAD Jan 20 01:48:20.617000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.617000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=87 op=UNLOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=88 op=LOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=89 op=LOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=89 op=UNLOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=88 op=UNLOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.623000 audit: BPF prog-id=90 op=LOAD Jan 20 01:48:20.623000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2811 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237646235393236393432323131653032646131373264643264313338 Jan 20 01:48:20.696000 audit: BPF prog-id=91 op=LOAD Jan 20 01:48:20.700000 audit: BPF prog-id=92 op=LOAD Jan 20 01:48:20.700000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.700000 audit: BPF prog-id=92 op=UNLOAD Jan 20 01:48:20.700000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.711000 audit: BPF prog-id=93 op=LOAD Jan 20 01:48:20.711000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.711000 audit: BPF prog-id=94 op=LOAD Jan 20 01:48:20.711000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.711000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.712000 audit: BPF prog-id=94 op=UNLOAD Jan 20 01:48:20.712000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.712000 audit: BPF prog-id=93 op=UNLOAD Jan 20 01:48:20.712000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.712000 audit: BPF prog-id=95 op=LOAD Jan 20 01:48:20.712000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2800 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:20.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383566373632643931636662363766653432396662616564636332 Jan 20 01:48:20.772872 kubelet[2734]: E0120 01:48:20.772143 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:48:21.052759 kubelet[2734]: E0120 01:48:21.052094 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:48:21.110740 containerd[1643]: time="2026-01-20T01:48:21.103581261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a\"" Jan 20 01:48:21.126826 kubelet[2734]: E0120 01:48:21.126769 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:21.168326 containerd[1643]: time="2026-01-20T01:48:21.161147884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a23d1c402d2fa864b27c69acb395c2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12\"" Jan 20 01:48:21.171745 kubelet[2734]: E0120 01:48:21.170662 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:21.204760 containerd[1643]: time="2026-01-20T01:48:21.204503356Z" level=info msg="CreateContainer within sandbox \"031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:48:21.235940 containerd[1643]: time="2026-01-20T01:48:21.232165517Z" level=info msg="CreateContainer within sandbox \"27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:48:21.248629 containerd[1643]: time="2026-01-20T01:48:21.247907967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3\"" Jan 20 01:48:21.250567 kubelet[2734]: E0120 01:48:21.249993 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:21.273881 containerd[1643]: time="2026-01-20T01:48:21.273819401Z" level=info msg="CreateContainer within sandbox \"c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:48:21.385792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233109485.mount: Deactivated successfully. 
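
The dns.go:153 records above fire because the host resolv.conf lists more nameservers than the kubelet will pass through: it applies the first three (matching the glibc resolver limit) and warns that the rest were omitted, which is why every warning shows the same applied line of 1.1.1.1 1.0.0.1 8.8.8.8. A minimal Python sketch of that truncation, with an illustrative fourth server standing in for whatever was actually dropped (the omitted entries are not in the log):

    # Sketch of the nameserver cap behind the dns.go:153 warnings above.
    # The limit of three matches the glibc resolver; the fourth entry
    # below is illustrative, not taken from this host's resolv.conf.
    MAX_NAMESERVERS = 3

    def apply_nameserver_limit(nameservers):
        kept = nameservers[:MAX_NAMESERVERS]
        omitted = len(nameservers) - len(kept)
        if omitted > 0:
            print(f"{omitted} nameserver(s) omitted, "
                  f"applied line is: {' '.join(kept)}")
        return kept

    apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"])
    # prints: 1 nameserver(s) omitted, applied line is: 1.1.1.1 1.0.0.1 8.8.8.8
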
Jan 20 01:48:21.396759 containerd[1643]: time="2026-01-20T01:48:21.394822839Z" level=info msg="Container 5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:48:21.432280 containerd[1643]: time="2026-01-20T01:48:21.429466497Z" level=info msg="Container c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:48:21.452644 containerd[1643]: time="2026-01-20T01:48:21.452581909Z" level=info msg="Container 63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:48:21.489846 containerd[1643]: time="2026-01-20T01:48:21.489782208Z" level=info msg="CreateContainer within sandbox \"031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43\"" Jan 20 01:48:21.500795 containerd[1643]: time="2026-01-20T01:48:21.500582065Z" level=info msg="StartContainer for \"5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43\"" Jan 20 01:48:21.511380 containerd[1643]: time="2026-01-20T01:48:21.511327272Z" level=info msg="connecting to shim 5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43" address="unix:///run/containerd/s/adbcef3ee3c5a6ebf35e7d50672629f7bc553c77f0fef6dfe0dc49de4b8e5235" protocol=ttrpc version=3 Jan 20 01:48:21.545853 containerd[1643]: time="2026-01-20T01:48:21.545791494Z" level=info msg="CreateContainer within sandbox \"c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907\"" Jan 20 01:48:21.556785 containerd[1643]: time="2026-01-20T01:48:21.553496676Z" level=info msg="CreateContainer within sandbox \"27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad\"" Jan 20 01:48:21.565391 containerd[1643]: time="2026-01-20T01:48:21.562505300Z" level=info msg="StartContainer for \"c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad\"" Jan 20 01:48:21.573345 containerd[1643]: time="2026-01-20T01:48:21.565220446Z" level=info msg="StartContainer for \"63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907\"" Jan 20 01:48:21.576246 containerd[1643]: time="2026-01-20T01:48:21.575896579Z" level=info msg="connecting to shim c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad" address="unix:///run/containerd/s/1b4b994f60f5f41dac22e31393bf5ef29764d0520f00532969a72a584aec2f98" protocol=ttrpc version=3 Jan 20 01:48:21.584755 containerd[1643]: time="2026-01-20T01:48:21.580475937Z" level=info msg="connecting to shim 63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907" address="unix:///run/containerd/s/4aea575429a2cd7e5f9f767a35b5e4a00b86ff3e051b43713b2194618990aec5" protocol=ttrpc version=3 Jan 20 01:48:21.750114 systemd[1]: Started cri-containerd-5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43.scope - libcontainer container 5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43. 
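
The audit PROCTITLE records interleaved through this section encode the audited command line as hex with NUL bytes separating arguments; decoded, each is the runc invocation that containerd's shim makes for a sandbox or container task. (On x86_64, arch=c000003e, the paired SYSCALL records are syscall 321, bpf(2), for the prog-id LOAD events and syscall 3, close(2), for the UNLOADs — consistent with runc attaching and releasing device-cgroup eBPF filters during container setup.) ausearch -i performs this decoding natively; a self-contained sketch using a prefix of the first PROCTITLE logged at 01:48:20.479000 above:

    # Decode an audit PROCTITLE hex string into its NUL-separated argv.
    def decode_proctitle(hex_str):
        raw = bytes.fromhex(hex_str)
        return [arg.decode("utf-8", errors="replace")
                for arg in raw.split(b"\x00")]

    # Prefix of the proctitle field from the 01:48:20.479000 record above.
    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
              "72756E632F6B38732E696F")
    print(decode_proctitle(sample))
    # ['runc', '--root', '/run/containerd/runc/k8s.io']
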
Jan 20 01:48:21.769794 kubelet[2734]: E0120 01:48:21.769395 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:48:21.771133 systemd[1]: Started cri-containerd-c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad.scope - libcontainer container c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad. Jan 20 01:48:21.996356 systemd[1]: Started cri-containerd-63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907.scope - libcontainer container 63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907. Jan 20 01:48:22.020000 audit: BPF prog-id=96 op=LOAD Jan 20 01:48:22.034000 audit: BPF prog-id=97 op=LOAD Jan 20 01:48:22.034000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000140238 a2=98 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.046000 audit: BPF prog-id=97 op=UNLOAD Jan 20 01:48:22.046000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.051000 audit: BPF prog-id=98 op=LOAD Jan 20 01:48:22.061000 audit: BPF prog-id=99 op=LOAD Jan 20 01:48:22.061000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.061000 audit: BPF prog-id=99 op=UNLOAD Jan 20 01:48:22.061000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.061000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.061000 audit: BPF prog-id=100 op=LOAD Jan 20 01:48:22.061000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.067000 audit: BPF prog-id=101 op=LOAD Jan 20 01:48:22.067000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.067000 audit: BPF prog-id=101 op=UNLOAD Jan 20 01:48:22.067000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.067000 audit: BPF prog-id=100 op=UNLOAD Jan 20 01:48:22.067000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.067000 audit: BPF prog-id=102 op=LOAD Jan 20 01:48:22.067000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2811 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303136386663353563316232363566326633306130316464303766 Jan 20 01:48:22.067000 audit: BPF prog-id=103 op=LOAD Jan 20 01:48:22.067000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000140488 a2=98 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.067000 audit: BPF prog-id=104 op=LOAD Jan 20 01:48:22.067000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000140218 a2=98 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.067000 audit: BPF prog-id=104 op=UNLOAD Jan 20 01:48:22.067000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.067000 audit: BPF prog-id=103 op=UNLOAD Jan 20 01:48:22.067000 audit[2918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.067000 audit: BPF prog-id=105 op=LOAD Jan 20 01:48:22.067000 audit[2918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001406e8 a2=98 a3=0 items=0 ppid=2787 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.067000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566336539303566613738623066373539346336313439313463373262 Jan 20 01:48:22.137332 kubelet[2734]: I0120 01:48:22.057506 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:22.137332 kubelet[2734]: E0120 01:48:22.070266 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jan 20 01:48:22.255809 kernel: kauditd_printk_skb: 136 callbacks suppressed Jan 20 01:48:22.255909 kernel: audit: type=1334 audit(1768873702.229:402): prog-id=106 op=LOAD Jan 20 01:48:22.229000 audit: BPF prog-id=106 op=LOAD Jan 20 01:48:22.282453 kernel: audit: type=1334 audit(1768873702.243:403): prog-id=107 op=LOAD Jan 20 01:48:22.243000 audit: BPF prog-id=107 op=LOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000c6238 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.326958 kubelet[2734]: E0120 01:48:22.318417 2734 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:48:22.335871 kernel: audit: type=1300 audit(1768873702.243:403): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000c6238 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.443455 kernel: audit: type=1327 audit(1768873702.243:403): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.443621 kernel: audit: type=1334 audit(1768873702.243:404): prog-id=107 op=UNLOAD Jan 20 01:48:22.443657 kernel: audit: type=1300 audit(1768873702.243:404): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: BPF prog-id=107 op=UNLOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.510611 kernel: audit: type=1327 audit(1768873702.243:404): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.596556 kernel: audit: type=1334 audit(1768873702.243:405): prog-id=108 op=LOAD Jan 20 01:48:22.243000 audit: BPF prog-id=108 op=LOAD Jan 20 01:48:22.615270 kernel: audit: type=1300 audit(1768873702.243:405): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000c6488 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000c6488 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.722655 kernel: audit: type=1327 audit(1768873702.243:405): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 audit: BPF prog-id=109 op=LOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000c6218 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 audit: BPF prog-id=109 op=UNLOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 
audit: BPF prog-id=108 op=UNLOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.243000 audit: BPF prog-id=110 op=LOAD Jan 20 01:48:22.243000 audit[2930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000c66e8 a2=98 a3=0 items=0 ppid=2800 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:22.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613134316239346633353336653230663064346538613166613362 Jan 20 01:48:22.836550 containerd[1643]: time="2026-01-20T01:48:22.833606511Z" level=info msg="StartContainer for \"c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad\" returns successfully" Jan 20 01:48:22.934155 containerd[1643]: time="2026-01-20T01:48:22.930029201Z" level=info msg="StartContainer for \"63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907\" returns successfully" Jan 20 01:48:23.009029 containerd[1643]: time="2026-01-20T01:48:23.003795780Z" level=info msg="StartContainer for \"5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43\" returns successfully" Jan 20 01:48:23.644651 kubelet[2734]: E0120 01:48:23.639882 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:23.644651 kubelet[2734]: E0120 01:48:23.640195 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:23.658836 kubelet[2734]: E0120 01:48:23.656323 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:23.658836 kubelet[2734]: E0120 01:48:23.656566 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:23.694083 kubelet[2734]: E0120 01:48:23.690429 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:23.694083 kubelet[2734]: E0120 01:48:23.690631 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:24.732302 kubelet[2734]: E0120 01:48:24.729382 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:24.732302 
kubelet[2734]: E0120 01:48:24.729587 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:24.732302 kubelet[2734]: E0120 01:48:24.730089 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:24.735842 kubelet[2734]: E0120 01:48:24.734510 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:24.735842 kubelet[2734]: E0120 01:48:24.735010 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:24.751216 kubelet[2734]: E0120 01:48:24.749778 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:25.282211 kubelet[2734]: I0120 01:48:25.277942 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:29.857911 kubelet[2734]: E0120 01:48:29.847293 2734 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:48:30.197462 kubelet[2734]: E0120 01:48:30.175117 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:30.214413 kubelet[2734]: E0120 01:48:30.208346 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:30.248287 kubelet[2734]: E0120 01:48:30.200947 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:30.248287 kubelet[2734]: E0120 01:48:30.244942 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:31.247829 kubelet[2734]: E0120 01:48:31.246644 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:31.274198 kubelet[2734]: E0120 01:48:31.254641 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:31.274198 kubelet[2734]: E0120 01:48:31.267142 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:31.277776 kubelet[2734]: E0120 01:48:31.275996 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:31.289182 kubelet[2734]: E0120 01:48:31.287138 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:31.289182 kubelet[2734]: E0120 01:48:31.287367 2734 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:34.025819 kubelet[2734]: E0120 01:48:34.020140 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 20 01:48:34.274525 kubelet[2734]: E0120 01:48:34.274435 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:48:34.901065 kubelet[2734]: E0120 01:48:34.900541 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:48:35.569903 kubelet[2734]: E0120 01:48:35.569392 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:48:39.859967 kubelet[2734]: E0120 01:48:39.850328 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:48:39.887822 kubelet[2734]: E0120 01:48:39.887458 2734 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:48:39.894525 kubelet[2734]: E0120 01:48:39.894245 2734 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4d413e5b6ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,LastTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:48:39.967951 kubelet[2734]: E0120 01:48:39.889001 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:48:41.152513 kubelet[2734]: E0120 01:48:41.149581 2734 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:48:41.342897 kubelet[2734]: E0120 01:48:41.337890 2734 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:41.346489 kubelet[2734]: E0120 01:48:41.345824 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:43.478589 kubelet[2734]: I0120 01:48:43.478224 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:48:50.462435 kubelet[2734]: E0120 01:48:50.459400 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:48:50.462435 kubelet[2734]: E0120 01:48:50.460889 2734 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:48:53.540457 kubelet[2734]: E0120 01:48:53.540352 2734 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:48:54.030572 kubelet[2734]: E0120 01:48:54.029278 2734 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:48:55.487754 kubelet[2734]: I0120 01:48:55.485951 2734 apiserver.go:52] "Watching apiserver" Jan 20 01:48:55.597112 kubelet[2734]: I0120 01:48:55.596959 2734 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:48:57.961981 kubelet[2734]: E0120 01:48:57.958366 2734 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 20 01:48:58.071998 kubelet[2734]: E0120 01:48:57.973901 2734 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4d413e5b6ca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,LastTimestamp:2026-01-20 01:48:16.264154277 +0000 UTC m=+2.911408466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:48:58.071998 kubelet[2734]: E0120 01:48:58.007013 2734 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:48:58.137033 kubelet[2734]: E0120 01:48:58.136049 2734 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:48:58.137033 kubelet[2734]: E0120 01:48:58.136416 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:01.382619 kubelet[2734]: E0120 01:49:01.378105 2734 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:49:01.534586 kubelet[2734]: E0120 01:49:01.512181 2734 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4d41672e748f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:48:16.949073039 +0000 UTC m=+3.596327229,LastTimestamp:2026-01-20 01:48:16.949073039 +0000 UTC m=+3.596327229,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:49:01.577869 kubelet[2734]: I0120 01:49:01.577784 2734 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:49:01.679506 kubelet[2734]: E0120 01:49:01.679262 2734 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.234s" Jan 20 01:49:05.727229 kubelet[2734]: E0120 01:49:05.696588 2734 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.07s" Jan 20 01:49:06.156596 kubelet[2734]: I0120 01:49:06.121299 2734 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:49:06.156596 kubelet[2734]: E0120 01:49:06.128652 2734 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:49:06.182157 kubelet[2734]: I0120 01:49:06.182098 2734 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:07.984178 kubelet[2734]: E0120 01:49:07.981412 2734 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.8s" Jan 20 01:49:08.297992 kubelet[2734]: I0120 01:49:08.297127 2734 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:49:08.858177 kubelet[2734]: E0120 01:49:08.858122 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:08.860495 kubelet[2734]: E0120 01:49:08.860467 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:08.882788 kubelet[2734]: I0120 01:49:08.882747 2734 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:09.159506 kubelet[2734]: E0120 01:49:09.151639 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:18.730185 kubelet[2734]: I0120 01:49:18.729844 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.729796162 podStartE2EDuration="10.729796162s" podCreationTimestamp="2026-01-20 01:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:49:18.729081624 +0000 UTC m=+65.376335854" watchObservedRunningTime="2026-01-20 01:49:18.729796162 +0000 UTC m=+65.377050370" Jan 20 01:49:19.139196 kubelet[2734]: I0120 01:49:19.137595 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=11.137536035 podStartE2EDuration="11.137536035s" podCreationTimestamp="2026-01-20 01:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:49:18.880093949 +0000 UTC m=+65.527348137" watchObservedRunningTime="2026-01-20 01:49:19.137536035 +0000 UTC m=+65.784790225" Jan 20 01:49:36.939520 systemd[1]: Reload requested from client PID 3032 ('systemctl') (unit session-9.scope)... Jan 20 01:49:36.944867 systemd[1]: Reloading... Jan 20 01:49:38.414953 zram_generator::config[3081]: No configuration found. Jan 20 01:49:38.470904 kubelet[2734]: E0120 01:49:38.469095 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:38.703541 kubelet[2734]: I0120 01:49:38.701861 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=30.701840914 podStartE2EDuration="30.701840914s" podCreationTimestamp="2026-01-20 01:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:49:19.161333217 +0000 UTC m=+65.808587406" watchObservedRunningTime="2026-01-20 01:49:38.701840914 +0000 UTC m=+85.349095123" Jan 20 01:49:39.203273 kubelet[2734]: E0120 01:49:39.201121 2734 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:41.609663 systemd[1]: Reloading finished in 4660 ms. Jan 20 01:49:41.765024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:49:41.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:41.818629 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:49:41.819382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:49:41.821984 systemd[1]: kubelet.service: Consumed 12.335s CPU time, 134.7M memory peak. Jan 20 01:49:41.912908 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:49:41.913134 kernel: audit: type=1131 audit(1768873781.818:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:49:41.837282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:49:41.918541 kernel: audit: type=1334 audit(1768873781.835:411): prog-id=111 op=LOAD Jan 20 01:49:41.918641 kernel: audit: type=1334 audit(1768873781.835:412): prog-id=77 op=UNLOAD Jan 20 01:49:41.918816 kernel: audit: type=1334 audit(1768873781.835:413): prog-id=112 op=LOAD Jan 20 01:49:41.918872 kernel: audit: type=1334 audit(1768873781.835:414): prog-id=113 op=LOAD Jan 20 01:49:41.918909 kernel: audit: type=1334 audit(1768873781.835:415): prog-id=78 op=UNLOAD Jan 20 01:49:41.918958 kernel: audit: type=1334 audit(1768873781.835:416): prog-id=79 op=UNLOAD Jan 20 01:49:41.921263 kernel: audit: type=1334 audit(1768873781.835:417): prog-id=114 op=LOAD Jan 20 01:49:41.921322 kernel: audit: type=1334 audit(1768873781.835:418): prog-id=80 op=UNLOAD Jan 20 01:49:41.921372 kernel: audit: type=1334 audit(1768873781.835:419): prog-id=115 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=111 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=77 op=UNLOAD Jan 20 01:49:41.835000 audit: BPF prog-id=112 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=113 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=78 op=UNLOAD Jan 20 01:49:41.835000 audit: BPF prog-id=79 op=UNLOAD Jan 20 01:49:41.835000 audit: BPF prog-id=114 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=80 op=UNLOAD Jan 20 01:49:41.835000 audit: BPF prog-id=115 op=LOAD Jan 20 01:49:41.835000 audit: BPF prog-id=61 op=UNLOAD Jan 20 01:49:41.850000 audit: BPF prog-id=116 op=LOAD Jan 20 01:49:41.850000 audit: BPF prog-id=117 op=LOAD Jan 20 01:49:41.850000 audit: BPF prog-id=66 op=UNLOAD Jan 20 01:49:41.850000 audit: BPF prog-id=67 op=UNLOAD Jan 20 01:49:41.861000 audit: BPF prog-id=118 op=LOAD Jan 20 01:49:41.861000 audit: BPF prog-id=71 op=UNLOAD Jan 20 01:49:41.861000 audit: BPF prog-id=119 op=LOAD Jan 20 01:49:41.861000 audit: BPF prog-id=120 op=LOAD Jan 20 01:49:41.861000 audit: BPF prog-id=72 op=UNLOAD Jan 20 01:49:41.861000 audit: BPF prog-id=73 op=UNLOAD Jan 20 01:49:41.869000 audit: BPF prog-id=121 op=LOAD Jan 20 01:49:41.869000 audit: BPF prog-id=65 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=122 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=74 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=123 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=124 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=75 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=76 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=125 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=68 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=126 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=127 op=LOAD Jan 20 01:49:41.884000 audit: BPF prog-id=69 op=UNLOAD Jan 20 01:49:41.884000 audit: BPF prog-id=70 op=UNLOAD Jan 20 01:49:41.894000 audit: BPF prog-id=128 op=LOAD Jan 20 01:49:41.894000 audit: BPF prog-id=62 op=UNLOAD Jan 20 01:49:41.894000 audit: BPF prog-id=129 op=LOAD Jan 20 01:49:41.894000 audit: BPF prog-id=130 op=LOAD Jan 20 01:49:41.894000 audit: BPF prog-id=63 op=UNLOAD Jan 20 01:49:41.894000 audit: BPF prog-id=64 op=UNLOAD Jan 20 01:49:45.061299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:49:45.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:49:45.227958 (kubelet)[3123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:49:46.235628 kubelet[3123]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:49:46.235628 kubelet[3123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:49:46.235628 kubelet[3123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:49:46.236406 kubelet[3123]: I0120 01:49:46.235795 3123 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:49:46.355280 kubelet[3123]: I0120 01:49:46.351529 3123 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:49:46.355280 kubelet[3123]: I0120 01:49:46.351573 3123 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:49:46.355280 kubelet[3123]: I0120 01:49:46.354015 3123 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:49:46.360518 kubelet[3123]: I0120 01:49:46.359982 3123 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:49:46.381173 kubelet[3123]: I0120 01:49:46.378481 3123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:49:46.651094 kubelet[3123]: I0120 01:49:46.644245 3123 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:49:46.801420 kubelet[3123]: I0120 01:49:46.727574 3123 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:49:46.801420 kubelet[3123]: I0120 01:49:46.730139 3123 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:49:46.801420 kubelet[3123]: I0120 01:49:46.730240 3123 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:49:46.801420 kubelet[3123]: I0120 01:49:46.730568 3123 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.730585 3123 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.730962 3123 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.731275 3123 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.731296 3123 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.731818 3123 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.732145 3123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.739342 3123 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.740175 3123 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.783482 3123 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.783545 3123 server.go:1289] "Started kubelet" Jan 20 01:49:46.802343 kubelet[3123]: I0120 01:49:46.798958 3123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:49:46.841105 kubelet[3123]: I0120 01:49:46.831252 
3123 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:49:46.841105 kubelet[3123]: I0120 01:49:46.832199 3123 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:49:46.841105 kubelet[3123]: I0120 01:49:46.832410 3123 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:49:46.881093 kubelet[3123]: I0120 01:49:46.843955 3123 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:49:46.921550 kubelet[3123]: I0120 01:49:46.912597 3123 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:49:46.977163 kubelet[3123]: I0120 01:49:46.975563 3123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:49:47.014116 kubelet[3123]: I0120 01:49:46.894549 3123 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:49:47.023257 kubelet[3123]: I0120 01:49:47.021988 3123 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:49:47.023257 kubelet[3123]: I0120 01:49:46.879660 3123 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:49:47.081297 kubelet[3123]: I0120 01:49:47.062498 3123 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:49:47.164126 kubelet[3123]: I0120 01:49:47.159851 3123 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:49:47.206398 kubelet[3123]: E0120 01:49:47.163012 3123 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:49:47.728528 kubelet[3123]: I0120 01:49:47.724654 3123 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 01:49:47.750016 kubelet[3123]: I0120 01:49:47.744295 3123 apiserver.go:52] "Watching apiserver" Jan 20 01:49:47.768634 kubelet[3123]: I0120 01:49:47.766369 3123 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:49:47.768634 kubelet[3123]: I0120 01:49:47.766411 3123 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:49:47.768634 kubelet[3123]: I0120 01:49:47.766454 3123 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
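The HardEvictionThresholds in the NodeConfig dump above are the stock kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and 5% inode floors on both filesystems. A minimal sketch of how such signals get evaluated against observed node stats (illustrative Python, not kubelet's actual eviction code):

    # Hard-eviction defaults as logged in the NodeConfig above.
    thresholds = {
        "memory.available":   ("quantity", 100 * 1024**2),  # < 100Mi
        "nodefs.available":   ("percent", 0.10),
        "nodefs.inodesFree":  ("percent", 0.05),
        "imagefs.available":  ("percent", 0.15),
        "imagefs.inodesFree": ("percent", 0.05),
    }

    def breached(signal: str, observed: float, capacity: float) -> bool:
        # Quantity thresholds are absolute; percent thresholds scale with capacity.
        kind, value = thresholds[signal]
        limit = value if kind == "quantity" else value * capacity
        return observed < limit

    # e.g. 80Mi free memory breaches the 100Mi hard threshold on any node
    print(breached("memory.available", 80 * 1024**2, 8 * 1024**3))  # True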
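The podresources endpoint above is capped at qps=100 with burstTokens=10, i.e. token-bucket semantics: ten requests are admitted immediately, after which capacity refills at 100 tokens per second. A toy sketch of those semantics (illustrative only, not the kubelet's implementation):

    import time

    class TokenBucket:
        # Illustrative token bucket: `burst` tokens available at once,
        # refilled continuously at `qps` tokens per second.
        def __init__(self, qps: float, burst: int):
            self.qps, self.burst = qps, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket(qps=100, burst=10)
    # In a tight loop only ~10 calls pass before the bucket must refill at 100/s.
    print(sum(bucket.allow() for _ in range(50)))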
Jan 20 01:49:47.768634 kubelet[3123]: I0120 01:49:47.766470 3123 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:49:47.768634 kubelet[3123]: E0120 01:49:47.766547 3123 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:49:47.895598 kubelet[3123]: E0120 01:49:47.895435 3123 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:49:48.031919 kubelet[3123]: I0120 01:49:48.028481 3123 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:49:48.035921 kubelet[3123]: I0120 01:49:48.032091 3123 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:49:48.035921 kubelet[3123]: I0120 01:49:48.035854 3123 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:49:48.043313 kubelet[3123]: I0120 01:49:48.038164 3123 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:49:48.043313 kubelet[3123]: I0120 01:49:48.043127 3123 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:49:48.043313 kubelet[3123]: I0120 01:49:48.043166 3123 policy_none.go:49] "None policy: Start" Jan 20 01:49:48.048398 kubelet[3123]: I0120 01:49:48.043644 3123 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:49:48.048398 kubelet[3123]: I0120 01:49:48.046384 3123 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:49:48.054534 kubelet[3123]: I0120 01:49:48.052143 3123 state_mem.go:75] "Updated machine memory state" Jan 20 01:49:48.098046 kubelet[3123]: E0120 01:49:48.095832 3123 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:49:48.323417 kubelet[3123]: E0120 01:49:48.314668 3123 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:49:48.323417 kubelet[3123]: I0120 01:49:48.315493 3123 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:49:48.323417 kubelet[3123]: I0120 01:49:48.315592 3123 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:49:48.343882 kubelet[3123]: I0120 01:49:48.337628 3123 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:49:48.378947 kubelet[3123]: I0120 01:49:48.378908 3123 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:49:48.407815 containerd[1643]: time="2026-01-20T01:49:48.407405412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:49:48.436326 kubelet[3123]: E0120 01:49:48.434272 3123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:49:48.436811 kubelet[3123]: I0120 01:49:48.436667 3123 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.770485 3123 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.771452 3123 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.839194 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qk6d\" (UniqueName: \"kubernetes.io/projected/2fd6c6fe-2f06-4f6e-bb86-692217979d1b-kube-api-access-5qk6d\") pod \"kube-proxy-m9br7\" (UID: \"2fd6c6fe-2f06-4f6e-bb86-692217979d1b\") " pod="kube-system/kube-proxy-m9br7" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.839359 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.839596 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:48.853362 kubelet[3123]: I0120 01:49:48.839634 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:48.860187 kubelet[3123]: I0120 01:49:48.839664 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:48.860187 kubelet[3123]: I0120 01:49:48.839822 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fd6c6fe-2f06-4f6e-bb86-692217979d1b-kube-proxy\") pod \"kube-proxy-m9br7\" (UID: \"2fd6c6fe-2f06-4f6e-bb86-692217979d1b\") " pod="kube-system/kube-proxy-m9br7" Jan 20 01:49:48.860187 kubelet[3123]: I0120 01:49:48.839927 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fd6c6fe-2f06-4f6e-bb86-692217979d1b-lib-modules\") pod \"kube-proxy-m9br7\" (UID: \"2fd6c6fe-2f06-4f6e-bb86-692217979d1b\") " pod="kube-system/kube-proxy-m9br7" Jan 20 01:49:48.860187 kubelet[3123]: I0120 01:49:48.840137 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a23d1c402d2fa864b27c69acb395c2a8-usr-share-ca-certificates\") 
pod \"kube-apiserver-localhost\" (UID: \"a23d1c402d2fa864b27c69acb395c2a8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:48.860187 kubelet[3123]: I0120 01:49:48.840174 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:48.860351 kubelet[3123]: I0120 01:49:48.840202 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:48.860351 kubelet[3123]: I0120 01:49:48.840467 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:49:48.860351 kubelet[3123]: I0120 01:49:48.840819 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:49:48.860351 kubelet[3123]: I0120 01:49:48.847194 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fd6c6fe-2f06-4f6e-bb86-692217979d1b-xtables-lock\") pod \"kube-proxy-m9br7\" (UID: \"2fd6c6fe-2f06-4f6e-bb86-692217979d1b\") " pod="kube-system/kube-proxy-m9br7" Jan 20 01:49:48.935419 systemd[1]: Created slice kubepods-besteffort-pod2fd6c6fe_2f06_4f6e_bb86_692217979d1b.slice - libcontainer container kubepods-besteffort-pod2fd6c6fe_2f06_4f6e_bb86_692217979d1b.slice. 
Jan 20 01:49:49.145363 kubelet[3123]: I0120 01:49:49.139577 3123 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:49:49.323308 kubelet[3123]: E0120 01:49:49.323260 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:49.358100 kubelet[3123]: E0120 01:49:49.358045 3123 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 01:49:49.375912 kubelet[3123]: E0120 01:49:49.375865 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:49.571508 kubelet[3123]: E0120 01:49:49.565195 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:49.571508 kubelet[3123]: I0120 01:49:49.570172 3123 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:49:49.571508 kubelet[3123]: I0120 01:49:49.570325 3123 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:49:49.730805 kubelet[3123]: E0120 01:49:49.728163 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:49.730988 containerd[1643]: time="2026-01-20T01:49:49.729843136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9br7,Uid:2fd6c6fe-2f06-4f6e-bb86-692217979d1b,Namespace:kube-system,Attempt:0,}" Jan 20 01:49:50.230068 kubelet[3123]: E0120 01:49:50.229574 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:50.249006 kubelet[3123]: E0120 01:49:50.244305 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:50.249006 kubelet[3123]: E0120 01:49:50.246456 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:50.256907 containerd[1643]: time="2026-01-20T01:49:50.256617683Z" level=info msg="connecting to shim a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522" address="unix:///run/containerd/s/17965282269aebe8094b881df4696ba798ceaa6c93ee33bc33d4082f902c5a58" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:49:51.302338 kubelet[3123]: E0120 01:49:51.300494 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:51.302338 kubelet[3123]: E0120 01:49:51.301942 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:51.302338 kubelet[3123]: E0120 01:49:51.302409 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 01:49:51.667281 systemd[1]: Started cri-containerd-a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522.scope - libcontainer container a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522. Jan 20 01:49:52.908090 kubelet[3123]: E0120 01:49:52.907362 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:52.939890 kubelet[3123]: E0120 01:49:52.939522 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:53.405453 kubelet[3123]: E0120 01:49:53.404284 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:53.778222 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 01:49:53.853057 kernel: audit: type=1334 audit(1768873793.441:452): prog-id=131 op=LOAD Jan 20 01:49:53.876087 kernel: audit: type=1334 audit(1768873793.496:453): prog-id=132 op=LOAD Jan 20 01:49:53.876183 kernel: audit: type=1300 audit(1768873793.496:453): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8238 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.876315 kernel: audit: type=1327 audit(1768873793.496:453): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.441000 audit: BPF prog-id=131 op=LOAD Jan 20 01:49:53.496000 audit: BPF prog-id=132 op=LOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8238 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.496000 audit: BPF prog-id=132 op=UNLOAD Jan 20 01:49:53.959354 kernel: audit: type=1334 audit(1768873793.496:454): prog-id=132 op=UNLOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:54.030424 kubelet[3123]: E0120 01:49:53.999520 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:53.496000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:54.215892 kernel: audit: type=1300 audit(1768873793.496:454): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:54.216119 kernel: audit: type=1327 audit(1768873793.496:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:54.216167 kernel: audit: type=1334 audit(1768873793.496:455): prog-id=133 op=LOAD Jan 20 01:49:53.496000 audit: BPF prog-id=133 op=LOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8488 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:54.324011 kubelet[3123]: E0120 01:49:54.319905 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:54.331922 kernel: audit: type=1300 audit(1768873793.496:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8488 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:54.332046 kernel: audit: type=1327 audit(1768873793.496:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:54.379290 containerd[1643]: time="2026-01-20T01:49:54.316353449Z" level=error msg="get state for a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522" error="context deadline exceeded" Jan 20 01:49:54.386872 containerd[1643]: time="2026-01-20T01:49:54.386556691Z" level=warning msg="unknown status" status=0 Jan 20 01:49:53.496000 audit: BPF prog-id=134 op=LOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001e8218 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.496000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.496000 audit: BPF prog-id=134 op=UNLOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.496000 audit: BPF prog-id=133 op=UNLOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:53.496000 audit: BPF prog-id=135 op=LOAD Jan 20 01:49:53.496000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e86e8 a2=98 a3=0 items=0 ppid=3181 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:53.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137376365633338373366363665373562323362663366633137303866 Jan 20 01:49:54.580814 containerd[1643]: time="2026-01-20T01:49:54.580160734Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:49:55.925534 containerd[1643]: time="2026-01-20T01:49:55.924460126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9br7,Uid:2fd6c6fe-2f06-4f6e-bb86-692217979d1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522\"" Jan 20 01:49:55.952285 kubelet[3123]: E0120 01:49:55.935927 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:56.137224 containerd[1643]: time="2026-01-20T01:49:56.132518351Z" level=info msg="CreateContainer within sandbox \"a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:49:56.669161 containerd[1643]: time="2026-01-20T01:49:56.668206844Z" level=info msg="Container e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:49:56.815571 containerd[1643]: time="2026-01-20T01:49:56.815506664Z" 
level=info msg="CreateContainer within sandbox \"a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7\"" Jan 20 01:49:56.846917 containerd[1643]: time="2026-01-20T01:49:56.846363975Z" level=info msg="StartContainer for \"e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7\"" Jan 20 01:49:57.013867 containerd[1643]: time="2026-01-20T01:49:57.004671219Z" level=info msg="connecting to shim e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7" address="unix:///run/containerd/s/17965282269aebe8094b881df4696ba798ceaa6c93ee33bc33d4082f902c5a58" protocol=ttrpc version=3 Jan 20 01:49:58.017411 systemd[1]: Started cri-containerd-e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7.scope - libcontainer container e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7. Jan 20 01:49:59.057854 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:49:59.058077 kernel: audit: type=1334 audit(1768873799.008:460): prog-id=136 op=LOAD Jan 20 01:49:59.008000 audit: BPF prog-id=136 op=LOAD Jan 20 01:49:59.064809 kernel: audit: type=1300 audit(1768873799.008:460): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.008000 audit[3222]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.253047 kernel: audit: type=1327 audit(1768873799.008:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.253178 kernel: audit: type=1334 audit(1768873799.027:461): prog-id=137 op=LOAD Jan 20 01:49:59.027000 audit: BPF prog-id=137 op=LOAD Jan 20 01:49:59.027000 audit[3222]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.631763 kernel: audit: type=1300 audit(1768873799.027:461): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.631901 kernel: audit: type=1327 audit(1768873799.027:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.631949 kernel: audit: type=1334 audit(1768873799.038:462): prog-id=137 op=UNLOAD Jan 20 01:49:59.038000 audit: BPF prog-id=137 op=UNLOAD Jan 20 01:49:59.680074 kernel: audit: type=1300 audit(1768873799.038:462): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.038000 audit[3222]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.818376 kernel: audit: type=1327 audit(1768873799.038:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.038000 audit: BPF prog-id=136 op=UNLOAD Jan 20 01:49:59.986037 kernel: audit: type=1334 audit(1768873799.038:463): prog-id=136 op=UNLOAD Jan 20 01:49:59.038000 audit[3222]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:49:59.038000 audit: BPF prog-id=138 op=LOAD Jan 20 01:49:59.038000 audit[3222]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3181 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:59.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336638626337353065336134663231326134353638353037313037 Jan 20 01:50:00.554149 containerd[1643]: time="2026-01-20T01:50:00.551568916Z" level=info msg="StartContainer for \"e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7\" returns 
successfully" Jan 20 01:50:01.201943 kubelet[3123]: E0120 01:50:01.192268 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:02.270887 kubelet[3123]: E0120 01:50:02.270483 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:03.426813 kubelet[3123]: I0120 01:50:03.415331 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9br7" podStartSLOduration=16.415309775 podStartE2EDuration="16.415309775s" podCreationTimestamp="2026-01-20 01:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:50:01.534859888 +0000 UTC m=+15.983308673" watchObservedRunningTime="2026-01-20 01:50:03.415309775 +0000 UTC m=+17.863758480" Jan 20 01:50:03.698927 systemd[1]: Created slice kubepods-besteffort-podb5fcbdbd_cad4_432a_97af_8c4b5df09bc0.slice - libcontainer container kubepods-besteffort-podb5fcbdbd_cad4_432a_97af_8c4b5df09bc0.slice. Jan 20 01:50:03.709792 kubelet[3123]: I0120 01:50:03.702288 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9l5n\" (UniqueName: \"kubernetes.io/projected/b5fcbdbd-cad4-432a-97af-8c4b5df09bc0-kube-api-access-j9l5n\") pod \"tigera-operator-7dcd859c48-dhhdq\" (UID: \"b5fcbdbd-cad4-432a-97af-8c4b5df09bc0\") " pod="tigera-operator/tigera-operator-7dcd859c48-dhhdq" Jan 20 01:50:03.709792 kubelet[3123]: I0120 01:50:03.705099 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5fcbdbd-cad4-432a-97af-8c4b5df09bc0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dhhdq\" (UID: \"b5fcbdbd-cad4-432a-97af-8c4b5df09bc0\") " pod="tigera-operator/tigera-operator-7dcd859c48-dhhdq" Jan 20 01:50:04.617851 containerd[1643]: time="2026-01-20T01:50:04.609522386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dhhdq,Uid:b5fcbdbd-cad4-432a-97af-8c4b5df09bc0,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:50:05.478205 containerd[1643]: time="2026-01-20T01:50:05.478137325Z" level=info msg="connecting to shim eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa" address="unix:///run/containerd/s/f46b98591f0a7eb530e13ff451ae5c318baefd5b9b3328df028a2ef6d430f64a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:50:06.293991 systemd[1]: Started cri-containerd-eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa.scope - libcontainer container eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa. 
Jan 20 01:50:06.814000 audit: BPF prog-id=139 op=LOAD Jan 20 01:50:06.871275 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 01:50:06.873890 kernel: audit: type=1334 audit(1768873806.814:465): prog-id=139 op=LOAD Jan 20 01:50:06.900812 kernel: audit: type=1334 audit(1768873806.814:466): prog-id=140 op=LOAD Jan 20 01:50:06.814000 audit: BPF prog-id=140 op=LOAD Jan 20 01:50:06.814000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:07.158454 kernel: audit: type=1300 audit(1768873806.814:466): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:07.158620 kernel: audit: type=1327 audit(1768873806.814:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.814000 audit: BPF prog-id=140 op=UNLOAD Jan 20 01:50:06.814000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:07.318671 kernel: audit: type=1334 audit(1768873806.814:467): prog-id=140 op=UNLOAD Jan 20 01:50:07.318921 kernel: audit: type=1300 audit(1768873806.814:467): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:07.446825 kernel: audit: type=1327 audit(1768873806.814:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.814000 audit: BPF prog-id=141 op=LOAD Jan 20 01:50:07.483888 kernel: audit: type=1334 audit(1768873806.814:468): prog-id=141 op=LOAD Jan 20 01:50:06.814000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:07.807982 kernel: audit: type=1300 audit(1768873806.814:468): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:07.808209 kernel: audit: type=1327 audit(1768873806.814:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.821000 audit: BPF prog-id=142 op=LOAD Jan 20 01:50:06.821000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.821000 audit: BPF prog-id=142 op=UNLOAD Jan 20 01:50:06.821000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.821000 audit: BPF prog-id=141 op=UNLOAD Jan 20 01:50:06.821000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:06.821000 audit: BPF prog-id=143 op=LOAD Jan 20 01:50:06.821000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3275 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:06.821000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562386533356661373330336665333961646564393038623431316534 Jan 20 01:50:08.784496 containerd[1643]: time="2026-01-20T01:50:08.784437860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dhhdq,Uid:b5fcbdbd-cad4-432a-97af-8c4b5df09bc0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa\"" Jan 20 01:50:08.851045 containerd[1643]: time="2026-01-20T01:50:08.847228565Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:50:08.930000 audit[3346]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:08.930000 audit[3346]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3f564c10 a2=0 a3=7ffc3f564bfc items=0 ppid=3237 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:08.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:50:09.033000 audit[3347]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:09.033000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee251fcb0 a2=0 a3=7ffee251fc9c items=0 ppid=3237 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:50:09.039000 audit[3350]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:09.039000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbaf0e970 a2=0 a3=7fffbaf0e95c items=0 ppid=3237 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:50:09.172000 audit[3351]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:09.172000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc453bb730 a2=0 a3=7ffc453bb71c items=0 ppid=3237 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:50:09.636000 audit[3353]: NETFILTER_CFG table=nat:58 family=10 entries=1 
op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:09.636000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4ca18070 a2=0 a3=7fff4ca1805c items=0 ppid=3237 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:50:09.809000 audit[3356]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:09.809000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5cb957d0 a2=0 a3=7ffd5cb957bc items=0 ppid=3237 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:50:09.844000 audit[3357]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:09.844000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdc3723b0 a2=0 a3=7fffdc37239c items=0 ppid=3237 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:50:09.940000 audit[3359]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:09.940000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe905d6aa0 a2=0 a3=7ffe905d6a8c items=0 ppid=3237 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:09.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 01:50:10.161000 audit[3362]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.161000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe162d8aa0 a2=0 a3=7ffe162d8a8c items=0 ppid=3237 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.161000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 01:50:10.201000 audit[3363]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.201000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf9816930 a2=0 a3=7ffdf981691c items=0 ppid=3237 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:50:10.445670 update_engine[1625]: I20260120 01:50:10.444257 1625 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 01:50:10.451647 update_engine[1625]: I20260120 01:50:10.451597 1625 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:50:10.470155 update_engine[1625]: I20260120 01:50:10.470095 1625 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:50:10.463000 audit[3365]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.463000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc61030d20 a2=0 a3=7ffc61030d0c items=0 ppid=3237 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:50:10.491077 update_engine[1625]: I20260120 01:50:10.491027 1625 omaha_request_params.cc:62] Current group set to beta Jan 20 01:50:10.507531 update_engine[1625]: I20260120 01:50:10.501222 1625 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:50:10.521213 update_engine[1625]: I20260120 01:50:10.521133 1625 update_attempter.cc:643] Scheduling an action processor start. 
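The NETFILTER_CFG records interleaved through this stretch are kube-proxy pre-creating its iptables/ip6tables chains; KUBE-PROXY-CANARY in particular appears to serve as a canary chain whose disappearance tells kube-proxy that its rules were flushed externally. Decoding the earliest proctitle in the run the same way recovers the exact command:

    blob = ("69707461626C6573002D770035002D5700313030303030"
            "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(" ".join(a.decode() for a in bytes.fromhex(blob).split(b"\x00")))
    # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle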
Jan 20 01:50:10.521565 update_engine[1625]: I20260120 01:50:10.521503 1625 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:50:10.522103 update_engine[1625]: I20260120 01:50:10.522076 1625 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:50:10.529837 update_engine[1625]: I20260120 01:50:10.529764 1625 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:50:10.530018 update_engine[1625]: I20260120 01:50:10.529990 1625 omaha_request_action.cc:272] Request: Jan 20 01:50:10.530018 update_engine[1625]: [Omaha request XML body omitted] Jan 20 01:50:10.537441 update_engine[1625]: I20260120 01:50:10.537380 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:50:10.537000 audit[3366]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.537000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd160d9a80 a2=0 a3=7ffd160d9a6c items=0 ppid=3237 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:50:10.629161 update_engine[1625]: I20260120 01:50:10.602142 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:50:10.631486 update_engine[1625]: I20260120 01:50:10.629989 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
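The Omaha exchange that follows fails by construction: the request is posted to the literal host "disabled", a non-resolving sentinel commonly used to switch updates off, so curl's DNS lookup can never succeed. A hedged sketch of that interpretation (omaha_endpoint_enabled is a hypothetical helper, not update_engine code):

    from urllib.parse import urlparse

    def omaha_endpoint_enabled(server: str) -> bool:
        # Hypothetical helper: treat an empty or sentinel host as "updates off".
        u = urlparse(server if "//" in server else "//" + server)
        return u.hostname not in (None, "", "disabled")

    print(omaha_endpoint_enabled("disabled"))  # False: DNS for "disabled" fails by design
    print(omaha_endpoint_enabled("https://example.com/v1/update/"))  # True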
Jan 20 01:50:10.661486 update_engine[1625]: E20260120 01:50:10.660851 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:50:10.661486 update_engine[1625]: I20260120 01:50:10.661404 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:50:10.672862 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:50:10.696000 audit[3368]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.696000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca005e120 a2=0 a3=7ffca005e10c items=0 ppid=3237 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 01:50:10.826000 audit[3371]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.826000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc3e449a0 a2=0 a3=7ffdc3e4498c items=0 ppid=3237 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 01:50:10.849000 audit[3372]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.849000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde6a04150 a2=0 a3=7ffde6a0413c items=0 ppid=3237 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:50:10.942000 audit[3374]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:10.942000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcfae7a8b0 a2=0 a3=7ffcfae7a89c items=0 ppid=3237 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:10.942000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:50:11.038000 audit[3375]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.038000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2a7c3420 a2=0 a3=7ffc2a7c340c items=0 ppid=3237 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:50:11.155000 audit[3377]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.155000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0edc4620 a2=0 a3=7fff0edc460c items=0 ppid=3237 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:50:11.267000 audit[3380]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.267000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeef48d820 a2=0 a3=7ffeef48d80c items=0 ppid=3237 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:50:11.737000 audit[3383]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.737000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca7814d40 a2=0 a3=7ffca7814d2c items=0 ppid=3237 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 01:50:11.874234 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 20 01:50:11.874596 kernel: audit: type=1325 audit(1768873811.849:493): table=nat:74 
family=2 entries=1 op=nft_register_chain pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.849000 audit[3384]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:11.912410 kernel: audit: type=1300 audit(1768873811.849:493): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9720ef10 a2=0 a3=7ffe9720eefc items=0 ppid=3237 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.849000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9720ef10 a2=0 a3=7ffe9720eefc items=0 ppid=3237 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:11.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:50:12.002427 kernel: audit: type=1327 audit(1768873811.849:493): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:50:12.042000 audit[3387]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.042000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd107247b0 a2=0 a3=7ffd1072479c items=0 ppid=3237 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.205562 kernel: audit: type=1325 audit(1768873812.042:494): table=nat:75 family=2 entries=1 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.205788 kernel: audit: type=1300 audit(1768873812.042:494): arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd107247b0 a2=0 a3=7ffd1072479c items=0 ppid=3237 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.205938 kernel: audit: type=1327 audit(1768873812.042:494): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:12.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:12.273048 kernel: audit: type=1325 audit(1768873812.182:495): table=nat:76 family=2 entries=1 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.182000 audit[3390]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.182000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd97852830 a2=0 a3=7ffd9785281c items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.462653 kernel: audit: type=1300 audit(1768873812.182:495): arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd97852830 a2=0 a3=7ffd9785281c items=0 ppid=3237 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.463378 kernel: audit: type=1327 audit(1768873812.182:495): proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:12.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:12.531178 kernel: audit: type=1325 audit(1768873812.198:496): table=nat:77 family=2 entries=1 op=nft_register_chain pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.198000 audit[3391]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.198000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc433d7c50 a2=0 a3=7ffc433d7c3c items=0 ppid=3237 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 01:50:12.413000 audit[3393]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:50:12.413000 audit[3393]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc99886780 a2=0 a3=7ffc9988676c items=0 ppid=3237 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:12.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:50:13.113000 audit[3399]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:50:13.113000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff691c1fc0 a2=0 a3=7fff691c1fac items=0 ppid=3237 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:50:13.256000 audit[3399]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3399 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:50:13.256000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff691c1fc0 a2=0 a3=7fff691c1fac items=0 ppid=3237 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:50:13.283000 audit[3405]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.283000 audit[3405]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff456903d0 a2=0 a3=7fff456903bc items=0 ppid=3237 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:50:13.325000 audit[3407]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.325000 audit[3407]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff387607a0 a2=0 a3=7fff3876078c items=0 ppid=3237 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 01:50:13.403000 audit[3410]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.403000 audit[3410]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcdbd92670 a2=0 a3=7ffcdbd9265c items=0 ppid=3237 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 01:50:13.540000 audit[3411]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.540000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0a8f76b0 a2=0 a3=7fff0a8f769c items=0 ppid=3237 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.540000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:50:13.651000 audit[3413]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.651000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe96ae5040 a2=0 a3=7ffe96ae502c items=0 ppid=3237 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:50:13.663000 audit[3414]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.663000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd85ca2fc0 a2=0 a3=7ffd85ca2fac items=0 ppid=3237 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:50:13.707000 audit[3416]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.707000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddb71aeb0 a2=0 a3=7ffddb71ae9c items=0 ppid=3237 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 01:50:13.808000 audit[3419]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.808000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffffd7846d0 a2=0 a3=7ffffd7846bc items=0 ppid=3237 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 01:50:13.841000 audit[3420]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:13.841000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff41229ad0 a2=0 
a3=7fff41229abc items=0 ppid=3237 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:13.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:50:14.133000 audit[3422]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.133000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7f5082b0 a2=0 a3=7ffd7f50829c items=0 ppid=3237 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:50:14.190000 audit[3423]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.190000 audit[3423]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3eab8740 a2=0 a3=7ffe3eab872c items=0 ppid=3237 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:50:14.257000 audit[3425]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.257000 audit[3425]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7784b980 a2=0 a3=7ffd7784b96c items=0 ppid=3237 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 01:50:14.437000 audit[3432]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.437000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffb05a520 a2=0 a3=7ffffb05a50c items=0 ppid=3237 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 01:50:14.807000 
audit[3435]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.807000 audit[3435]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff13f4d230 a2=0 a3=7fff13f4d21c items=0 ppid=3237 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 01:50:14.852000 audit[3436]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.852000 audit[3436]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe46ad6f60 a2=0 a3=7ffe46ad6f4c items=0 ppid=3237 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:50:14.944000 audit[3438]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:14.944000 audit[3438]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffecf7d5d0 a2=0 a3=7fffecf7d5bc items=0 ppid=3237 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:14.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:15.107000 audit[3441]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.107000 audit[3441]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc69ab700 a2=0 a3=7ffcc69ab6ec items=0 ppid=3237 pid=3441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:50:15.140000 audit[3442]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.140000 audit[3442]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3a1cd750 a2=0 a3=7fff3a1cd73c items=0 ppid=3237 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:50:15.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 01:50:15.226000 audit[3444]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.226000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdaae10aa0 a2=0 a3=7ffdaae10a8c items=0 ppid=3237 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:50:15.243000 audit[3445]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.243000 audit[3445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec942a710 a2=0 a3=7ffec942a6fc items=0 ppid=3237 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:50:15.289000 audit[3447]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.289000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe754e3db0 a2=0 a3=7ffe754e3d9c items=0 ppid=3237 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:50:15.356000 audit[3450]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:50:15.356000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0af7d6c0 a2=0 a3=7ffe0af7d6ac items=0 ppid=3237 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:50:15.384497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981634118.mount: Deactivated successfully. 
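From table=filter:81 onwards the NETFILTER_CFG records carry family=10 instead of family=2: the same Kubernetes chain setup is being repeated for IPv6 via ip6tables. The family numbers are the Linux address-family constants, which a quick check confirms (values assumed Linux-specific):

    import socket

    # family= in the NETFILTER_CFG records is the address family of the table:
    # 2 = AF_INET (iptables), 10 = AF_INET6 (ip6tables) on Linux.
    print(int(socket.AF_INET), int(socket.AF_INET6))   # 2 10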
Jan 20 01:50:15.468000 audit[3452]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:50:15.468000 audit[3452]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc3cf61240 a2=0 a3=7ffc3cf6122c items=0 ppid=3237 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.468000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:50:15.468000 audit[3452]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:50:15.468000 audit[3452]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc3cf61240 a2=0 a3=7ffc3cf6122c items=0 ppid=3237 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:15.468000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:50:21.226830 update_engine[1625]: I20260120 01:50:21.219904 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:50:21.226830 update_engine[1625]: I20260120 01:50:21.220202 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:50:21.226830 update_engine[1625]: I20260120 01:50:21.226193 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
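Feeding the iptables-restore and ip6tables-restore PROCTITLE records in this stretch through the same decoder sketched earlier yields, for example, ip6tables-restore -w 5 -W 100000 --noflush --counters: after creating the individual chains, the rule writer (presumably kube-proxy, given the "kubernetes service portals" comments in the decoded rules) switches to batched restore calls that load whole rulesets in one invocation without flushing what is already installed.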
Jan 20 01:50:21.288008 update_engine[1625]: E20260120 01:50:21.281284 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:50:21.288008 update_engine[1625]: I20260120 01:50:21.281419 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:50:28.693819 containerd[1643]: time="2026-01-20T01:50:28.692850522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:28.722311 containerd[1643]: time="2026-01-20T01:50:28.720627727Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 20 01:50:28.731783 containerd[1643]: time="2026-01-20T01:50:28.731392806Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:28.796308 containerd[1643]: time="2026-01-20T01:50:28.791954350Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:28.796308 containerd[1643]: time="2026-01-20T01:50:28.793148249Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 19.939600157s" Jan 20 01:50:28.796308 containerd[1643]: time="2026-01-20T01:50:28.793283717Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 01:50:28.911595 containerd[1643]: time="2026-01-20T01:50:28.911537147Z" level=info msg="CreateContainer within sandbox \"eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:50:29.043625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537463054.mount: Deactivated successfully. 
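For scale, the operator image pull that just completed moved 25052948 bytes in 19.939600157 s, roughly 1.2 MiB/s; the figures below are taken directly from the containerd messages above:

    # Effective pull throughput for quay.io/tigera/operator:v1.38.7,
    # straight from the "bytes read" and duration logged above.
    bytes_read = 25_052_948
    seconds = 19.939600157
    print(f"{bytes_read / seconds / 2**20:.2f} MiB/s")   # ~1.20 MiB/s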
Jan 20 01:50:29.079421 containerd[1643]: time="2026-01-20T01:50:29.075350952Z" level=info msg="Container 0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:29.119276 containerd[1643]: time="2026-01-20T01:50:29.118525153Z" level=info msg="CreateContainer within sandbox \"eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146\"" Jan 20 01:50:29.123295 containerd[1643]: time="2026-01-20T01:50:29.120369504Z" level=info msg="StartContainer for \"0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146\"" Jan 20 01:50:29.123295 containerd[1643]: time="2026-01-20T01:50:29.122924924Z" level=info msg="connecting to shim 0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146" address="unix:///run/containerd/s/f46b98591f0a7eb530e13ff451ae5c318baefd5b9b3328df028a2ef6d430f64a" protocol=ttrpc version=3 Jan 20 01:50:29.490849 systemd[1]: Started cri-containerd-0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146.scope - libcontainer container 0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146. Jan 20 01:50:29.779000 audit: BPF prog-id=144 op=LOAD Jan 20 01:50:29.799567 kernel: kauditd_printk_skb: 83 callbacks suppressed Jan 20 01:50:29.799956 kernel: audit: type=1334 audit(1768873829.779:524): prog-id=144 op=LOAD Jan 20 01:50:29.834972 kernel: audit: type=1334 audit(1768873829.793:525): prog-id=145 op=LOAD Jan 20 01:50:29.837355 kernel: audit: type=1300 audit(1768873829.793:525): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit: BPF prog-id=145 op=LOAD Jan 20 01:50:29.793000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.793000 audit: BPF prog-id=145 op=UNLOAD Jan 20 01:50:30.100603 kernel: audit: type=1327 audit(1768873829.793:525): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:30.100837 kernel: audit: type=1334 audit(1768873829.793:526): prog-id=145 op=UNLOAD Jan 20 01:50:30.100896 kernel: audit: type=1300 audit(1768873829.793:526): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:30.307793 kernel: audit: type=1327 audit(1768873829.793:526): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.793000 audit: BPF prog-id=146 op=LOAD Jan 20 01:50:30.331011 kernel: audit: type=1334 audit(1768873829.793:527): prog-id=146 op=LOAD Jan 20 01:50:29.793000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:30.414252 kernel: audit: type=1300 audit(1768873829.793:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:30.414408 kernel: audit: type=1327 audit(1768873829.793:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.793000 audit: BPF prog-id=147 op=LOAD Jan 20 01:50:29.793000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.793000 audit: BPF prog-id=147 op=UNLOAD Jan 20 01:50:29.793000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.793000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.798000 audit: BPF prog-id=146 op=UNLOAD Jan 20 01:50:29.798000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:29.798000 audit: BPF prog-id=148 op=LOAD Jan 20 01:50:29.798000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3275 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:50:29.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066336430636466363732383633343432333565636233393134393462 Jan 20 01:50:30.794390 containerd[1643]: time="2026-01-20T01:50:30.781046072Z" level=info msg="StartContainer for \"0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146\" returns successfully" Jan 20 01:50:31.224574 update_engine[1625]: I20260120 01:50:31.211894 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:50:31.224574 update_engine[1625]: I20260120 01:50:31.226148 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:50:31.296613 update_engine[1625]: I20260120 01:50:31.239620 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:50:31.303929 update_engine[1625]: E20260120 01:50:31.300606 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:50:31.303929 update_engine[1625]: I20260120 01:50:31.301307 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 01:50:41.220524 update_engine[1625]: I20260120 01:50:41.215924 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:50:41.220524 update_engine[1625]: I20260120 01:50:41.216099 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:50:41.220524 update_engine[1625]: I20260120 01:50:41.218568 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
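The transfer now starting at 01:50:41 is the fourth fetch attempt (01:50:10, :21, :31, :41), each failing identically: the Omaha URL is the literal string "disabled", which curl treats as a hostname and cannot resolve. That is the expected signature when the update server has been switched off in the host's update configuration (an assumption about this machine's settings; the log itself only shows "Posting an Omaha request to disabled"). A small sketch for pulling the attempts out of a captured journal snippet; retry_re, retries, and log_text are illustrative names, not part of update_engine:

    import re

    # Sketch: list (timestamp, retry-number) pairs from captured
    # update_engine output such as the journal text above.
    retry_re = re.compile(
        r"I(\d{8} \d{2}:\d{2}:\d{2})\.\d+ .*No HTTP response, retry (\d+)")

    def retries(log_text: str):
        return [(m.group(1), int(m.group(2))) for m in retry_re.finditer(log_text)]

As the next entries show, the engine gives up after this attempt and reschedules the check.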
Jan 20 01:50:41.265522 update_engine[1625]: E20260120 01:50:41.261667 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.261965 1625 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.261984 1625 omaha_request_action.cc:617] Omaha request response: Jan 20 01:50:41.265522 update_engine[1625]: E20260120 01:50:41.262214 1625 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262307 1625 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262374 1625 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262389 1625 update_attempter.cc:306] Processing Done. Jan 20 01:50:41.265522 update_engine[1625]: E20260120 01:50:41.262485 1625 update_attempter.cc:619] Update failed. Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262571 1625 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262585 1625 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.262596 1625 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.265484 1625 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.265532 1625 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:50:41.265522 update_engine[1625]: I20260120 01:50:41.265544 1625 omaha_request_action.cc:272] Request: Jan 20 01:50:41.265522 update_engine[1625]: [Omaha request XML body not preserved in this capture] Jan 20 01:50:41.266273 update_engine[1625]: I20260120 01:50:41.265556 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:50:41.266273 update_engine[1625]: I20260120 01:50:41.265589 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:50:41.266273 update_engine[1625]: I20260120 01:50:41.266206 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
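locksmithd mirrors the engine's state machine as flat key=value status lines (UPDATE_STATUS_CHECKING_FOR_UPDATE earlier, UPDATE_STATUS_REPORTING_ERROR_EVENT and UPDATE_STATUS_IDLE below). A minimal parsing sketch, assuming the quoting stays shell-like as in the lines here; parse_status is an illustrative helper, not part of locksmithd:

    import shlex

    # Sketch: parse a locksmithd status line into a dict; shlex strips the
    # quotes around values like CurrentOperation="UPDATE_STATUS_IDLE".
    def parse_status(line: str) -> dict:
        return dict(field.split("=", 1) for field in shlex.split(line))

    print(parse_status('LastCheckedTime=0 Progress=0 '
                       'CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0'))
    # {'LastCheckedTime': '0', 'Progress': '0',
    #  'CurrentOperation': 'UPDATE_STATUS_IDLE', 'NewVersion': '0.0.0', 'NewSize': '0'}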
Jan 20 01:50:41.275980 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 01:50:41.313662 update_engine[1625]: E20260120 01:50:41.308807 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309007 1625 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309029 1625 omaha_request_action.cc:617] Omaha request response: Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309047 1625 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309059 1625 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309068 1625 update_attempter.cc:306] Processing Done. Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309082 1625 update_attempter.cc:310] Error event sent. Jan 20 01:50:41.313662 update_engine[1625]: I20260120 01:50:41.309144 1625 update_check_scheduler.cc:74] Next update check in 42m39s Jan 20 01:50:41.314229 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 01:50:59.964534 sudo[1862]: pam_unix(sudo:session): session closed for user root Jan 20 01:50:59.969000 audit[1862]: USER_END pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:50:59.990192 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:50:59.990368 kernel: audit: type=1106 audit(1768873859.969:532): pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:51:00.005306 sshd[1861]: Connection closed by 10.0.0.1 port 58398 Jan 20 01:51:00.018123 sshd-session[1858]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:59.969000 audit[1862]: CRED_DISP pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:51:00.077922 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:58398.service: Deactivated successfully. Jan 20 01:51:00.102455 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:51:00.106417 systemd[1]: session-9.scope: Consumed 31.234s CPU time, 220.5M memory peak. Jan 20 01:51:00.119869 kernel: audit: type=1104 audit(1768873859.969:533): pid=1862 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 01:51:00.120017 kernel: audit: type=1106 audit(1768873860.051:534): pid=1858 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:51:00.051000 audit[1858]: USER_END pid=1858 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:51:00.141965 systemd-logind[1623]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:51:00.163662 systemd-logind[1623]: Removed session 9. Jan 20 01:51:00.051000 audit[1858]: CRED_DISP pid=1858 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:51:00.255330 kernel: audit: type=1104 audit(1768873860.051:535): pid=1858 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:51:00.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.44:22-10.0.0.1:58398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:00.455590 kernel: audit: type=1131 audit(1768873860.077:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.44:22-10.0.0.1:58398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:51:00.853574 kubelet[3123]: E0120 01:51:00.850385 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:03.967477 kubelet[3123]: E0120 01:51:03.965940 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:16.605022 kubelet[3123]: E0120 01:51:16.596458 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:18.333613 kubelet[3123]: E0120 01:51:18.316776 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.851s" Jan 20 01:51:19.759105 kubelet[3123]: E0120 01:51:19.758861 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.442s" Jan 20 01:51:23.825893 kubelet[3123]: E0120 01:51:23.814946 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:25.387000 audit[3568]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:25.470094 kernel: audit: type=1325 audit(1768873885.387:537): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:25.387000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd2a8003e0 a2=0 a3=7ffd2a8003cc items=0 ppid=3237 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:25.648209 kernel: audit: type=1300 audit(1768873885.387:537): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd2a8003e0 a2=0 a3=7ffd2a8003cc items=0 ppid=3237 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:25.387000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:25.625000 audit[3568]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:25.804443 kernel: audit: type=1327 audit(1768873885.387:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:25.804662 kernel: audit: type=1325 audit(1768873885.625:538): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:25.804829 kernel: audit: type=1300 audit(1768873885.625:538): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2a8003e0 a2=0 a3=0 items=0 ppid=3237 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:51:25.625000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2a8003e0 a2=0 a3=0 items=0 ppid=3237 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:25.961909 kernel: audit: type=1327 audit(1768873885.625:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:25.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:26.203000 audit[3570]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:26.298086 kernel: audit: type=1325 audit(1768873886.203:539): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:26.298231 kernel: audit: type=1300 audit(1768873886.203:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe0443b380 a2=0 a3=7ffe0443b36c items=0 ppid=3237 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:26.203000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe0443b380 a2=0 a3=7ffe0443b36c items=0 ppid=3237 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:26.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:26.515321 kernel: audit: type=1327 audit(1768873886.203:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:26.515480 kernel: audit: type=1325 audit(1768873886.452:540): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:26.452000 audit[3570]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3570 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:26.452000 audit[3570]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0443b380 a2=0 a3=0 items=0 ppid=3237 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:26.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:46.937874 kubelet[3123]: E0120 01:51:46.937799 3123 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 20 01:51:48.447649 kubelet[3123]: E0120 01:51:48.446846 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:51:53.451033 kubelet[3123]: E0120 01:51:53.450352 3123 kubelet.go:3117] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:51:56.706495 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 01:51:56.706847 kernel: audit: type=1325 audit(1768873916.635:541): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:56.635000 audit[3576]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:56.827604 kernel: audit: type=1300 audit(1768873916.635:541): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7c63a800 a2=0 a3=7ffc7c63a7ec items=0 ppid=3237 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:56.635000 audit[3576]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7c63a800 a2=0 a3=7ffc7c63a7ec items=0 ppid=3237 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:56.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:56.914163 kernel: audit: type=1327 audit(1768873916.635:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:56.925000 audit[3576]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:56.925000 audit[3576]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7c63a800 a2=0 a3=0 items=0 ppid=3237 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:57.197494 kernel: audit: type=1325 audit(1768873916.925:542): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:57.197650 kernel: audit: type=1300 audit(1768873916.925:542): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7c63a800 a2=0 a3=0 items=0 ppid=3237 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:56.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:57.235526 kernel: audit: type=1327 audit(1768873916.925:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:57.555000 audit[3578]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:57.555000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa1b453c0 a2=0 a3=7fffa1b453ac items=0 ppid=3237 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:57.770151 kernel: audit: type=1325 audit(1768873917.555:543): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:57.770577 kernel: audit: type=1300 audit(1768873917.555:543): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa1b453c0 a2=0 a3=7fffa1b453ac items=0 ppid=3237 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:57.770639 kernel: audit: type=1327 audit(1768873917.555:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:57.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:57.931000 audit[3578]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:57.995731 kernel: audit: type=1325 audit(1768873917.931:544): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:51:57.931000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa1b453c0 a2=0 a3=0 items=0 ppid=3237 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:51:57.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:51:58.458796 kubelet[3123]: E0120 01:51:58.454668 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:03.507122 kubelet[3123]: E0120 01:52:03.507053 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:08.535126 kubelet[3123]: E0120 01:52:08.534991 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:12.430000 audit[3582]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:12.468402 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 01:52:12.468627 kernel: audit: type=1325 audit(1768873932.430:545): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:12.430000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda8df7130 a2=0 a3=7ffda8df711c items=0 ppid=3237 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:12.552656 kernel: audit: type=1300 
audit(1768873932.430:545): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda8df7130 a2=0 a3=7ffda8df711c items=0 ppid=3237 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:12.552896 kernel: audit: type=1327 audit(1768873932.430:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:12.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:12.600866 kernel: audit: type=1325 audit(1768873932.509:546): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:12.509000 audit[3582]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:12.509000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda8df7130 a2=0 a3=0 items=0 ppid=3237 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:12.690500 kernel: audit: type=1300 audit(1768873932.509:546): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda8df7130 a2=0 a3=0 items=0 ppid=3237 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:12.690768 kernel: audit: type=1327 audit(1768873932.509:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:12.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.025000 audit[3584]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.057203 kubelet[3123]: I0120 01:52:13.042269 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dhhdq" podStartSLOduration=110.063789406 podStartE2EDuration="2m10.042245438s" podCreationTimestamp="2026-01-20 01:50:03 +0000 UTC" firstStartedPulling="2026-01-20 01:50:08.845617968 +0000 UTC m=+23.294066673" lastFinishedPulling="2026-01-20 01:50:28.824074001 +0000 UTC m=+43.272522705" observedRunningTime="2026-01-20 01:50:31.557892432 +0000 UTC m=+46.006341137" watchObservedRunningTime="2026-01-20 01:52:13.042245438 +0000 UTC m=+147.490694143" Jan 20 01:52:13.057979 kernel: audit: type=1325 audit(1768873933.025:547): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.071890 kernel: audit: type=1300 audit(1768873933.025:547): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff1541d440 a2=0 a3=7fff1541d42c items=0 ppid=3237 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:13.025000 
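
The pod_startup_latency_tracker record above for the tigera-operator pod reads as plain timestamp arithmetic: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to exclude the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with the values from the record, truncated to microseconds since Python's datetime carries no nanoseconds:

```python
from datetime import datetime, timezone

fmt = "%Y-%m-%d %H:%M:%S.%f"
utc = timezone.utc

created = datetime(2026, 1, 20, 1, 50, 3, tzinfo=utc)  # podCreationTimestamp
first_pull = datetime.strptime("2026-01-20 01:50:08.845617", fmt).replace(tzinfo=utc)
last_pull = datetime.strptime("2026-01-20 01:50:28.824074", fmt).replace(tzinfo=utc)
observed = datetime.strptime("2026-01-20 01:52:13.042245", fmt).replace(tzinfo=utc)

e2e = (observed - created).total_seconds()                 # podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()       # minus image-pull time
print(f"E2E {e2e:.6f}s, SLO {slo:.6f}s")
# E2E 130.042245s, SLO 110.063788s -- matching podStartE2EDuration (2m10.04s)
# and podStartSLOduration (110.06s) up to the truncated nanoseconds.
```
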
audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff1541d440 a2=0 a3=7fff1541d42c items=0 ppid=3237 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:13.166731 systemd[1]: Created slice kubepods-besteffort-pod39ca85b3_5f40_48f7_a575_5e7142bfafdf.slice - libcontainer container kubepods-besteffort-pod39ca85b3_5f40_48f7_a575_5e7142bfafdf.slice. Jan 20 01:52:13.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.243758 kernel: audit: type=1327 audit(1768873933.025:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.243915 kernel: audit: type=1325 audit(1768873933.070:548): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.070000 audit[3584]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.252859 kubelet[3123]: I0120 01:52:13.239643 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/39ca85b3-5f40-48f7-a575-5e7142bfafdf-typha-certs\") pod \"calico-typha-76ffb8b7db-l9ggx\" (UID: \"39ca85b3-5f40-48f7-a575-5e7142bfafdf\") " pod="calico-system/calico-typha-76ffb8b7db-l9ggx" Jan 20 01:52:13.252859 kubelet[3123]: I0120 01:52:13.240032 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ca85b3-5f40-48f7-a575-5e7142bfafdf-tigera-ca-bundle\") pod \"calico-typha-76ffb8b7db-l9ggx\" (UID: \"39ca85b3-5f40-48f7-a575-5e7142bfafdf\") " pod="calico-system/calico-typha-76ffb8b7db-l9ggx" Jan 20 01:52:13.070000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff1541d440 a2=0 a3=0 items=0 ppid=3237 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:13.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.309785 kubelet[3123]: I0120 01:52:13.308048 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67p4\" (UniqueName: \"kubernetes.io/projected/39ca85b3-5f40-48f7-a575-5e7142bfafdf-kube-api-access-v67p4\") pod \"calico-typha-76ffb8b7db-l9ggx\" (UID: \"39ca85b3-5f40-48f7-a575-5e7142bfafdf\") " pod="calico-system/calico-typha-76ffb8b7db-l9ggx" Jan 20 01:52:13.607361 kubelet[3123]: E0120 01:52:13.591346 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:13.742000 audit[3588]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.742000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffb696b520 a2=0 
a3=7fffb696b50c items=0 ppid=3237 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:13.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.763000 audit[3588]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:52:13.763000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb696b520 a2=0 a3=0 items=0 ppid=3237 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:13.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:52:13.872213 kubelet[3123]: E0120 01:52:13.871817 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:13.955400 containerd[1643]: time="2026-01-20T01:52:13.955195612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76ffb8b7db-l9ggx,Uid:39ca85b3-5f40-48f7-a575-5e7142bfafdf,Namespace:calico-system,Attempt:0,}" Jan 20 01:52:14.329851 systemd[1]: Created slice kubepods-besteffort-pod1346b44e_5bca_45f4_a2e0_4c89e7aca64f.slice - libcontainer container kubepods-besteffort-pod1346b44e_5bca_45f4_a2e0_4c89e7aca64f.slice. Jan 20 01:52:14.370811 kubelet[3123]: I0120 01:52:14.370096 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlg6\" (UniqueName: \"kubernetes.io/projected/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-kube-api-access-qhlg6\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.370811 kubelet[3123]: I0120 01:52:14.370242 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-lib-modules\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.370811 kubelet[3123]: I0120 01:52:14.370290 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-cni-log-dir\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.370811 kubelet[3123]: I0120 01:52:14.370320 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-node-certs\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.370811 kubelet[3123]: I0120 01:52:14.370343 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
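
The dns.go "Nameserver limits exceeded" error above is kubelet applying the glibc resolver cap: at most three `nameserver` entries are honored, so kubelet keeps the first three from the node's resolv.conf and drops the rest. A sketch of that truncation; the fourth server (8.8.4.4) is a hypothetical stand-in for whichever entry was actually omitted:

```python
MAX_NAMESERVERS = 3  # glibc resolver limit that kubelet enforces

def effective_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [parts[1]
               for line in resolv_conf_text.splitlines()
               if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
    return servers[:MAX_NAMESERVERS]

sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(effective_nameservers(sample))
# ['1.1.1.1', '1.0.0.1', '8.8.8.8'] -- the "applied nameserver line" in the log
```
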
\"kubernetes.io/configmap/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-tigera-ca-bundle\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373192 kubelet[3123]: I0120 01:52:14.370374 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-cni-net-dir\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373192 kubelet[3123]: I0120 01:52:14.370404 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-var-run-calico\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373192 kubelet[3123]: I0120 01:52:14.370438 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-xtables-lock\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373192 kubelet[3123]: I0120 01:52:14.370470 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-flexvol-driver-host\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373192 kubelet[3123]: I0120 01:52:14.370501 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-policysync\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373453 kubelet[3123]: I0120 01:52:14.370530 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-cni-bin-dir\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.373453 kubelet[3123]: I0120 01:52:14.370565 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1346b44e-5bca-45f4-a2e0-4c89e7aca64f-var-lib-calico\") pod \"calico-node-s9z5p\" (UID: \"1346b44e-5bca-45f4-a2e0-4c89e7aca64f\") " pod="calico-system/calico-node-s9z5p" Jan 20 01:52:14.431093 containerd[1643]: time="2026-01-20T01:52:14.430927054Z" level=info msg="connecting to shim f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99" address="unix:///run/containerd/s/0911ec49f7a6bc811a7a4cff8080ec2f9d3035d84454ae8e04aff4bdd78ace46" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:14.525894 kubelet[3123]: E0120 01:52:14.521849 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:14.525894 kubelet[3123]: W0120 01:52:14.521915 3123 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:14.525894 kubelet[3123]: E0120 01:52:14.521958 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:14.556371 kubelet[3123]: E0120 01:52:14.556241 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:14.556371 kubelet[3123]: W0120 01:52:14.556280 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:14.556371 kubelet[3123]: E0120 01:52:14.556309 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.256818 kubelet[3123]: E0120 01:52:15.205348 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.256818 kubelet[3123]: W0120 01:52:15.205477 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.256818 kubelet[3123]: E0120 01:52:15.205882 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.266807 kubelet[3123]: E0120 01:52:15.261993 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:15.267009 containerd[1643]: time="2026-01-20T01:52:15.263890171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9z5p,Uid:1346b44e-5bca-45f4-a2e0-4c89e7aca64f,Namespace:calico-system,Attempt:0,}" Jan 20 01:52:15.317117 kubelet[3123]: E0120 01:52:15.316141 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:52:15.351515 systemd[1]: Started cri-containerd-f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99.scope - libcontainer container f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99. Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.382314 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.383397 kubelet[3123]: W0120 01:52:15.382369 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.382399 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
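
The FlexVolume noise that follows is a probe failure, not a data-path error: on every plugin re-probe, kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument, the binary does not exist yet, and the resulting empty stdout fails JSON decoding ("unexpected end of JSON input"). The contract the probe expects is a single JSON status object on stdout; a minimal sketch of a conforming driver follows (any executable works, Python is used here only for illustration):

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver sketch answering the "init" probe that is
# failing above. kubelet parses whatever the driver prints as JSON, so
# a missing binary (empty output) yields "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # attach=False: this driver has no separate attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported", "message": f"op {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

In a Calico rollout this path is normally populated by calico-node's flexvol driver (note the flexvol-driver-host host-path volume in the records above), so these probe errors, like the "cni plugin not initialized" errors, typically stop once calico-node is up.
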
Error: unexpected end of JSON input" Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.382741 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.383397 kubelet[3123]: W0120 01:52:15.382758 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.382781 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.383044 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.383397 kubelet[3123]: W0120 01:52:15.383056 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.383397 kubelet[3123]: E0120 01:52:15.383070 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.386047 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.393123 kubelet[3123]: W0120 01:52:15.386099 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.386122 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.386526 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.393123 kubelet[3123]: W0120 01:52:15.386541 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.386785 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.387109 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.393123 kubelet[3123]: W0120 01:52:15.387121 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.387135 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.393123 kubelet[3123]: E0120 01:52:15.387433 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.396931 kubelet[3123]: W0120 01:52:15.387445 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.387457 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.387766 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.396931 kubelet[3123]: W0120 01:52:15.387782 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.387797 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.388051 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.396931 kubelet[3123]: W0120 01:52:15.388060 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.388071 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.396931 kubelet[3123]: E0120 01:52:15.393390 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.396931 kubelet[3123]: W0120 01:52:15.393413 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.393437 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.393856 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397405 kubelet[3123]: W0120 01:52:15.393876 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.393895 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.394237 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397405 kubelet[3123]: W0120 01:52:15.394255 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.394275 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.394586 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397405 kubelet[3123]: W0120 01:52:15.394598 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397405 kubelet[3123]: E0120 01:52:15.394613 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.394965 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397884 kubelet[3123]: W0120 01:52:15.394981 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.394995 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.395311 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397884 kubelet[3123]: W0120 01:52:15.395324 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.395338 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.395585 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.397884 kubelet[3123]: W0120 01:52:15.395598 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.397884 kubelet[3123]: E0120 01:52:15.395610 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401295 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.419662 kubelet[3123]: W0120 01:52:15.401317 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401340 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401595 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.419662 kubelet[3123]: W0120 01:52:15.401608 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401622 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401942 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.419662 kubelet[3123]: W0120 01:52:15.401955 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.401970 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.419662 kubelet[3123]: E0120 01:52:15.402280 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.420114 kubelet[3123]: W0120 01:52:15.402295 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.420114 kubelet[3123]: E0120 01:52:15.402315 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.420114 kubelet[3123]: E0120 01:52:15.402780 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.420114 kubelet[3123]: W0120 01:52:15.402795 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.420114 kubelet[3123]: E0120 01:52:15.402813 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.420114 kubelet[3123]: I0120 01:52:15.402847 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eeb09d5e-8a63-4fca-910b-ea49fa1ecf05-varrun\") pod \"csi-node-driver-x6f5h\" (UID: \"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05\") " pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:52:15.420114 kubelet[3123]: E0120 01:52:15.403143 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.420114 kubelet[3123]: W0120 01:52:15.403209 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.420114 kubelet[3123]: E0120 01:52:15.403226 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.420469 kubelet[3123]: I0120 01:52:15.403254 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2hv\" (UniqueName: \"kubernetes.io/projected/eeb09d5e-8a63-4fca-910b-ea49fa1ecf05-kube-api-access-pb2hv\") pod \"csi-node-driver-x6f5h\" (UID: \"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05\") " pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:52:15.420469 kubelet[3123]: E0120 01:52:15.403567 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.420469 kubelet[3123]: W0120 01:52:15.403582 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.420469 kubelet[3123]: E0120 01:52:15.403596 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.420469 kubelet[3123]: I0120 01:52:15.403621 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeb09d5e-8a63-4fca-910b-ea49fa1ecf05-registration-dir\") pod \"csi-node-driver-x6f5h\" (UID: \"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05\") " pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:52:15.420469 kubelet[3123]: E0120 01:52:15.403980 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.420469 kubelet[3123]: W0120 01:52:15.403997 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.420469 kubelet[3123]: E0120 01:52:15.404012 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.421563 kubelet[3123]: I0120 01:52:15.404034 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb09d5e-8a63-4fca-910b-ea49fa1ecf05-kubelet-dir\") pod \"csi-node-driver-x6f5h\" (UID: \"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05\") " pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:52:15.421563 kubelet[3123]: E0120 01:52:15.418006 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.421563 kubelet[3123]: W0120 01:52:15.418051 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.421563 kubelet[3123]: E0120 01:52:15.418089 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.421563 kubelet[3123]: I0120 01:52:15.418137 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeb09d5e-8a63-4fca-910b-ea49fa1ecf05-socket-dir\") pod \"csi-node-driver-x6f5h\" (UID: \"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05\") " pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:52:15.464354 kubelet[3123]: E0120 01:52:15.447415 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.464354 kubelet[3123]: W0120 01:52:15.454560 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.464354 kubelet[3123]: E0120 01:52:15.454609 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.517585 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.520257 kubelet[3123]: W0120 01:52:15.517633 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.517665 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.518251 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.520257 kubelet[3123]: W0120 01:52:15.518270 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.518295 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.518649 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.520257 kubelet[3123]: W0120 01:52:15.518665 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.518748 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.520257 kubelet[3123]: E0120 01:52:15.519078 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.521095 kubelet[3123]: W0120 01:52:15.519093 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.521095 kubelet[3123]: E0120 01:52:15.519110 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.565529 kubelet[3123]: E0120 01:52:15.564982 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.565529 kubelet[3123]: W0120 01:52:15.565060 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.565529 kubelet[3123]: E0120 01:52:15.565099 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.585907 kubelet[3123]: E0120 01:52:15.578805 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.585907 kubelet[3123]: W0120 01:52:15.578843 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.585907 kubelet[3123]: E0120 01:52:15.578873 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.598363 kubelet[3123]: E0120 01:52:15.595378 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.598363 kubelet[3123]: W0120 01:52:15.595425 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.598363 kubelet[3123]: E0120 01:52:15.595462 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.609857 kubelet[3123]: E0120 01:52:15.604069 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.610136 kubelet[3123]: W0120 01:52:15.610085 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.610479 kubelet[3123]: E0120 01:52:15.610446 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.618630 kubelet[3123]: E0120 01:52:15.618583 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.618904 kubelet[3123]: W0120 01:52:15.618871 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.623152 kubelet[3123]: E0120 01:52:15.623115 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.638014 kubelet[3123]: E0120 01:52:15.631905 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.638014 kubelet[3123]: W0120 01:52:15.631946 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.638014 kubelet[3123]: E0120 01:52:15.631990 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.638014 kubelet[3123]: E0120 01:52:15.632548 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.638014 kubelet[3123]: W0120 01:52:15.632567 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.638014 kubelet[3123]: E0120 01:52:15.632591 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.644015 kubelet[3123]: E0120 01:52:15.642827 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.644015 kubelet[3123]: W0120 01:52:15.642861 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.644015 kubelet[3123]: E0120 01:52:15.642895 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.647855 kubelet[3123]: E0120 01:52:15.644629 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.647855 kubelet[3123]: W0120 01:52:15.644656 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.647855 kubelet[3123]: E0120 01:52:15.644739 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.663743 kubelet[3123]: E0120 01:52:15.663524 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.663743 kubelet[3123]: W0120 01:52:15.663594 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.663743 kubelet[3123]: E0120 01:52:15.663631 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.688000 audit: BPF prog-id=149 op=LOAD Jan 20 01:52:15.693000 audit: BPF prog-id=150 op=LOAD Jan 20 01:52:15.693000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.693000 audit: BPF prog-id=150 op=UNLOAD Jan 20 01:52:15.693000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.699000 audit: BPF prog-id=151 op=LOAD Jan 20 01:52:15.699000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.700000 audit: BPF prog-id=152 op=LOAD Jan 20 
01:52:15.700000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.700000 audit: BPF prog-id=152 op=UNLOAD Jan 20 01:52:15.700000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.701000 audit: BPF prog-id=151 op=UNLOAD Jan 20 01:52:15.701000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.701000 audit: BPF prog-id=153 op=LOAD Jan 20 01:52:15.701000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3597 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:15.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635633165356263666538663834326236383132623534343238616632 Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.708951 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.734611 kubelet[3123]: W0120 01:52:15.708989 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.709024 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
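
The BPF audit records here come from runc starting the calico-typha container: the container id in the runc proctitle (f5c1e5bc...) matches the cri-containerd scope above, on x86_64 syscall=321 is bpf(), the LOAD/UNLOAD pairs match runc swapping in its cgroup device-filter program, and the positive exit values are the returned program fds. A small lookup table for reading these SYSCALL lines without an interpreter such as `ausearch -i`:

```python
# A few x86_64 syscall numbers that appear in the audit records above.
X86_64_SYSCALLS = {
    3: "close",     # closing a BPF prog fd afterwards (exit=0)
    46: "sendmsg",  # netlink messages carrying the nft rule updates
    321: "bpf",     # loading/attaching cgroup device-filter programs
}

def describe(syscall: int, exit_value: int) -> str:
    name = X86_64_SYSCALLS.get(syscall, f"syscall_{syscall}")
    return f"{name} -> exit={exit_value}"

print(describe(321, 21))  # bpf -> exit=21 (the new program's fd)
```
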
Error: unexpected end of JSON input" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.715797 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.734611 kubelet[3123]: W0120 01:52:15.715824 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.715855 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.722792 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.734611 kubelet[3123]: W0120 01:52:15.722822 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.722853 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.734611 kubelet[3123]: E0120 01:52:15.732561 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.735095 kubelet[3123]: W0120 01:52:15.732596 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.735095 kubelet[3123]: E0120 01:52:15.732628 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.744531 kubelet[3123]: E0120 01:52:15.740154 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.744531 kubelet[3123]: W0120 01:52:15.744331 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.744531 kubelet[3123]: E0120 01:52:15.744376 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.744985 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.746491 kubelet[3123]: W0120 01:52:15.745038 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.745062 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.745507 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.746491 kubelet[3123]: W0120 01:52:15.745520 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.745540 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.745918 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.746491 kubelet[3123]: W0120 01:52:15.745931 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.746491 kubelet[3123]: E0120 01:52:15.745945 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.763967 kubelet[3123]: E0120 01:52:15.763494 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.763967 kubelet[3123]: W0120 01:52:15.763545 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.763967 kubelet[3123]: E0120 01:52:15.763582 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.791243 kubelet[3123]: E0120 01:52:15.791088 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.791398 kubelet[3123]: W0120 01:52:15.791369 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.791496 kubelet[3123]: E0120 01:52:15.791477 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.792009 kubelet[3123]: E0120 01:52:15.791989 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.792295 kubelet[3123]: W0120 01:52:15.792092 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.792295 kubelet[3123]: E0120 01:52:15.792119 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.809277 kubelet[3123]: E0120 01:52:15.809228 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.827527 kubelet[3123]: W0120 01:52:15.823120 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.827527 kubelet[3123]: E0120 01:52:15.823243 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.835375 kubelet[3123]: E0120 01:52:15.831550 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.835375 kubelet[3123]: W0120 01:52:15.831595 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.835375 kubelet[3123]: E0120 01:52:15.831632 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.854733 kubelet[3123]: E0120 01:52:15.851162 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.854733 kubelet[3123]: W0120 01:52:15.853397 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.854733 kubelet[3123]: E0120 01:52:15.853435 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.854733 kubelet[3123]: E0120 01:52:15.854039 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.854733 kubelet[3123]: W0120 01:52:15.854056 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.854733 kubelet[3123]: E0120 01:52:15.854079 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.886965 kubelet[3123]: E0120 01:52:15.866966 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.886965 kubelet[3123]: W0120 01:52:15.867040 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.886965 kubelet[3123]: E0120 01:52:15.867077 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:15.919228 kubelet[3123]: E0120 01:52:15.917536 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.919228 kubelet[3123]: W0120 01:52:15.917769 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.919228 kubelet[3123]: E0120 01:52:15.918048 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.928461 kubelet[3123]: E0120 01:52:15.926115 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.928461 kubelet[3123]: W0120 01:52:15.926225 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.928461 kubelet[3123]: E0120 01:52:15.926267 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.943040 kubelet[3123]: E0120 01:52:15.937503 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.943040 kubelet[3123]: W0120 01:52:15.937636 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.943040 kubelet[3123]: E0120 01:52:15.937859 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:15.961259 kubelet[3123]: E0120 01:52:15.961167 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:15.961536 kubelet[3123]: W0120 01:52:15.961501 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:15.961747 kubelet[3123]: E0120 01:52:15.961663 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:16.188982 kubelet[3123]: E0120 01:52:16.188935 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:16.189377 kubelet[3123]: W0120 01:52:16.189340 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:16.195794 containerd[1643]: time="2026-01-20T01:52:16.191042692Z" level=info msg="connecting to shim 90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a" address="unix:///run/containerd/s/9dd5e4039d6f5b3712b1257a02176431d1406dc965366ea78fa428141f709a9a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:16.201412 kubelet[3123]: E0120 01:52:16.196024 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:16.234393 containerd[1643]: time="2026-01-20T01:52:16.234337903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76ffb8b7db-l9ggx,Uid:39ca85b3-5f40-48f7-a575-5e7142bfafdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99\"" Jan 20 01:52:16.311596 kubelet[3123]: E0120 01:52:16.310759 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:16.356768 containerd[1643]: time="2026-01-20T01:52:16.351769996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:52:16.812885 kubelet[3123]: E0120 01:52:16.806919 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:52:16.884508 systemd[1]: Started cri-containerd-90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a.scope - libcontainer container 90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a. 
Jan 20 01:52:17.148000 audit: BPF prog-id=154 op=LOAD Jan 20 01:52:17.153000 audit: BPF prog-id=155 op=LOAD Jan 20 01:52:17.153000 audit[3743]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.153000 audit: BPF prog-id=155 op=UNLOAD Jan 20 01:52:17.153000 audit[3743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.153000 audit: BPF prog-id=156 op=LOAD Jan 20 01:52:17.153000 audit[3743]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.163000 audit: BPF prog-id=157 op=LOAD Jan 20 01:52:17.163000 audit[3743]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.192000 audit: BPF prog-id=157 op=UNLOAD Jan 20 01:52:17.192000 audit[3743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.192000 audit: BPF prog-id=156 op=UNLOAD Jan 20 01:52:17.192000 audit[3743]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.192000 audit: BPF prog-id=158 op=LOAD Jan 20 01:52:17.192000 audit[3743]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3731 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:17.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656638386437366330303330623934373538356464396433643865 Jan 20 01:52:17.553047 containerd[1643]: time="2026-01-20T01:52:17.552596971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9z5p,Uid:1346b44e-5bca-45f4-a2e0-4c89e7aca64f,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\"" Jan 20 01:52:17.555965 kubelet[3123]: E0120 01:52:17.553883 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:17.814836 kubelet[3123]: E0120 01:52:17.814632 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:17.907776 kubelet[3123]: E0120 01:52:17.907629 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.908149 kubelet[3123]: W0120 01:52:17.907770 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.908149 kubelet[3123]: E0120 01:52:17.907892 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.918262 kubelet[3123]: E0120 01:52:17.915238 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.921756 kubelet[3123]: W0120 01:52:17.921635 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.921940 kubelet[3123]: E0120 01:52:17.921914 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:17.932969 kubelet[3123]: E0120 01:52:17.932546 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.932969 kubelet[3123]: W0120 01:52:17.932598 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.932969 kubelet[3123]: E0120 01:52:17.932637 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.937945 kubelet[3123]: E0120 01:52:17.937809 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.938885 kubelet[3123]: W0120 01:52:17.938662 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.944948 kubelet[3123]: E0120 01:52:17.939005 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.952071 kubelet[3123]: E0120 01:52:17.951955 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.952753 kubelet[3123]: W0120 01:52:17.952356 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.952753 kubelet[3123]: E0120 01:52:17.952581 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.954947 kubelet[3123]: E0120 01:52:17.954922 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.955066 kubelet[3123]: W0120 01:52:17.955047 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.955277 kubelet[3123]: E0120 01:52:17.955252 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.955987 kubelet[3123]: E0120 01:52:17.955859 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.955987 kubelet[3123]: W0120 01:52:17.955878 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.955987 kubelet[3123]: E0120 01:52:17.955899 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:17.960900 kubelet[3123]: E0120 01:52:17.960854 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.963757 kubelet[3123]: W0120 01:52:17.961010 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.963757 kubelet[3123]: E0120 01:52:17.961050 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.964260 kubelet[3123]: E0120 01:52:17.964007 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.964260 kubelet[3123]: W0120 01:52:17.964032 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.964260 kubelet[3123]: E0120 01:52:17.964055 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.968122 kubelet[3123]: E0120 01:52:17.967885 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.968122 kubelet[3123]: W0120 01:52:17.967910 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.968122 kubelet[3123]: E0120 01:52:17.967934 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.982622 kubelet[3123]: E0120 01:52:17.982567 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.990000 kubelet[3123]: W0120 01:52:17.984834 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.990000 kubelet[3123]: E0120 01:52:17.984896 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:17.992234 kubelet[3123]: E0120 01:52:17.991080 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:17.992234 kubelet[3123]: W0120 01:52:17.991138 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:17.992234 kubelet[3123]: E0120 01:52:17.991174 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:18.006762 kubelet[3123]: E0120 01:52:18.002501 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.006762 kubelet[3123]: W0120 01:52:18.002564 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.006762 kubelet[3123]: E0120 01:52:18.002602 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.006762 kubelet[3123]: E0120 01:52:18.005219 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.006762 kubelet[3123]: W0120 01:52:18.005240 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.006762 kubelet[3123]: E0120 01:52:18.005270 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.027897 kubelet[3123]: E0120 01:52:18.027601 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.027897 kubelet[3123]: W0120 01:52:18.027652 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.027897 kubelet[3123]: E0120 01:52:18.027752 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.028602 kubelet[3123]: E0120 01:52:18.028578 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.028915 kubelet[3123]: W0120 01:52:18.028768 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.028915 kubelet[3123]: E0120 01:52:18.028800 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.029263 kubelet[3123]: E0120 01:52:18.029248 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.029465 kubelet[3123]: W0120 01:52:18.029373 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.029465 kubelet[3123]: E0120 01:52:18.029399 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:18.029906 kubelet[3123]: E0120 01:52:18.029889 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.030044 kubelet[3123]: W0120 01:52:18.029968 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.030044 kubelet[3123]: E0120 01:52:18.029988 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.038429 kubelet[3123]: E0120 01:52:18.038135 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.038429 kubelet[3123]: W0120 01:52:18.038222 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.038429 kubelet[3123]: E0120 01:52:18.038257 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.054398 kubelet[3123]: E0120 01:52:18.051751 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.054398 kubelet[3123]: W0120 01:52:18.051807 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.054398 kubelet[3123]: E0120 01:52:18.051843 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.059523 kubelet[3123]: E0120 01:52:18.056881 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.059523 kubelet[3123]: W0120 01:52:18.056907 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.059523 kubelet[3123]: E0120 01:52:18.056936 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.061034 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.065793 kubelet[3123]: W0120 01:52:18.061059 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.061086 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.062806 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.065793 kubelet[3123]: W0120 01:52:18.062832 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.062855 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.064061 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.065793 kubelet[3123]: W0120 01:52:18.064080 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.065793 kubelet[3123]: E0120 01:52:18.064189 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.082862 kubelet[3123]: E0120 01:52:18.073215 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:18.082862 kubelet[3123]: W0120 01:52:18.073252 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:18.082862 kubelet[3123]: E0120 01:52:18.073282 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:18.621266 kubelet[3123]: E0120 01:52:18.620796 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:18.882576 kubelet[3123]: E0120 01:52:18.880270 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:52:18.960551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1360721348.mount: Deactivated successfully. 
Jan 20 01:52:19.857585 kubelet[3123]: E0120 01:52:19.851669 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:19.866467 kubelet[3123]: E0120 01:52:19.865579 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:19.898802 kubelet[3123]: E0120 01:52:19.895957 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.898802 kubelet[3123]: W0120 01:52:19.896011 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.898802 kubelet[3123]: E0120 01:52:19.896062 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.910478 kubelet[3123]: E0120 01:52:19.908796 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.910478 kubelet[3123]: W0120 01:52:19.908863 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.910478 kubelet[3123]: E0120 01:52:19.908904 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.919175 kubelet[3123]: E0120 01:52:19.917109 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.919175 kubelet[3123]: W0120 01:52:19.917143 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.919175 kubelet[3123]: E0120 01:52:19.917173 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.932534 kubelet[3123]: E0120 01:52:19.926344 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.932534 kubelet[3123]: W0120 01:52:19.929858 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.932534 kubelet[3123]: E0120 01:52:19.929912 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:19.934526 kubelet[3123]: E0120 01:52:19.933930 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.934526 kubelet[3123]: W0120 01:52:19.933954 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.934526 kubelet[3123]: E0120 01:52:19.933984 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.941582 kubelet[3123]: E0120 01:52:19.941148 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.947618 kubelet[3123]: W0120 01:52:19.941191 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.947618 kubelet[3123]: E0120 01:52:19.942636 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.948282 kubelet[3123]: E0120 01:52:19.948251 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.948444 kubelet[3123]: W0120 01:52:19.948382 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.948581 kubelet[3123]: E0120 01:52:19.948559 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.949120 kubelet[3123]: E0120 01:52:19.949100 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.949200 kubelet[3123]: W0120 01:52:19.949184 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.949305 kubelet[3123]: E0120 01:52:19.949286 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.957049 kubelet[3123]: E0120 01:52:19.957012 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.957220 kubelet[3123]: W0120 01:52:19.957197 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.957327 kubelet[3123]: E0120 01:52:19.957306 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:19.969157 kubelet[3123]: E0120 01:52:19.969111 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.969331 kubelet[3123]: W0120 01:52:19.969311 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.980559 kubelet[3123]: E0120 01:52:19.978244 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.981276 kubelet[3123]: E0120 01:52:19.981212 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.981514 kubelet[3123]: W0120 01:52:19.981395 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.981740 kubelet[3123]: E0120 01:52:19.981642 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.982665 kubelet[3123]: E0120 01:52:19.982643 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.982845 kubelet[3123]: W0120 01:52:19.982825 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.983025 kubelet[3123]: E0120 01:52:19.983005 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.984344 kubelet[3123]: E0120 01:52:19.984324 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.984908 kubelet[3123]: W0120 01:52:19.984770 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.985274 kubelet[3123]: E0120 01:52:19.985248 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.986547 kubelet[3123]: E0120 01:52:19.986522 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.986649 kubelet[3123]: W0120 01:52:19.986630 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.986852 kubelet[3123]: E0120 01:52:19.986829 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:19.991911 kubelet[3123]: E0120 01:52:19.991881 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.992063 kubelet[3123]: W0120 01:52:19.992042 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.992259 kubelet[3123]: E0120 01:52:19.992238 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.996959 kubelet[3123]: E0120 01:52:19.996933 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.997341 kubelet[3123]: W0120 01:52:19.997106 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.997341 kubelet[3123]: E0120 01:52:19.997169 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:19.999018 kubelet[3123]: E0120 01:52:19.998886 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:19.999018 kubelet[3123]: W0120 01:52:19.998901 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:19.999018 kubelet[3123]: E0120 01:52:19.999001 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:20.001303 kubelet[3123]: E0120 01:52:20.001007 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:20.001303 kubelet[3123]: W0120 01:52:20.001024 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:20.001303 kubelet[3123]: E0120 01:52:20.001042 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:20.004669 kubelet[3123]: E0120 01:52:20.004610 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:20.004669 kubelet[3123]: W0120 01:52:20.004630 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:20.004669 kubelet[3123]: E0120 01:52:20.004648 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 01:52:20.004669 kubelet[3123]: E0120 01:52:20.006975 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.004669 kubelet[3123]: W0120 01:52:20.006990 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.004669 kubelet[3123]: E0120 01:52:20.007005 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.008888 kubelet[3123]: E0120 01:52:20.008834 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.008888 kubelet[3123]: W0120 01:52:20.008848 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.008888 kubelet[3123]: E0120 01:52:20.008863 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.013933 kubelet[3123]: E0120 01:52:20.013741 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.013933 kubelet[3123]: W0120 01:52:20.013767 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.013933 kubelet[3123]: E0120 01:52:20.013787 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.018822 kubelet[3123]: E0120 01:52:20.018783 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.018822 kubelet[3123]: W0120 01:52:20.018810 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.018981 kubelet[3123]: E0120 01:52:20.018837 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.025615 kubelet[3123]: E0120 01:52:20.024882 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.025615 kubelet[3123]: W0120 01:52:20.024993 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.025615 kubelet[3123]: E0120 01:52:20.025022 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.028374 kubelet[3123]: E0120 01:52:20.026870 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.028374 kubelet[3123]: W0120 01:52:20.026906 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.028374 kubelet[3123]: E0120 01:52:20.026931 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.028374 kubelet[3123]: E0120 01:52:20.028170 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.028374 kubelet[3123]: W0120 01:52:20.028186 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.028374 kubelet[3123]: E0120 01:52:20.028202 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.034276 kubelet[3123]: E0120 01:52:20.030818 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.034276 kubelet[3123]: W0120 01:52:20.030839 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.034276 kubelet[3123]: E0120 01:52:20.030858 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.034276 kubelet[3123]: E0120 01:52:20.033048 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.034276 kubelet[3123]: W0120 01:52:20.033063 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.034276 kubelet[3123]: E0120 01:52:20.033079 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.036823 kubelet[3123]: E0120 01:52:20.036158 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.036823 kubelet[3123]: W0120 01:52:20.036173 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.036823 kubelet[3123]: E0120 01:52:20.036188 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.038814 kubelet[3123]: E0120 01:52:20.038783 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.039022 kubelet[3123]: W0120 01:52:20.039004 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.039177 kubelet[3123]: E0120 01:52:20.039141 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.041841 kubelet[3123]: E0120 01:52:20.041813 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.041941 kubelet[3123]: W0120 01:52:20.041920 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.042030 kubelet[3123]: E0120 01:52:20.042012 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.044833 kubelet[3123]: E0120 01:52:20.043909 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.044833 kubelet[3123]: W0120 01:52:20.043947 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.044833 kubelet[3123]: E0120 01:52:20.043967 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.045226 kubelet[3123]: E0120 01:52:20.045202 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.045317 kubelet[3123]: W0120 01:52:20.045297 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.049756 kubelet[3123]: E0120 01:52:20.047793 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.049756 kubelet[3123]: E0120 01:52:20.048858 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.049756 kubelet[3123]: W0120 01:52:20.048872 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.049756 kubelet[3123]: E0120 01:52:20.048886 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.056247 kubelet[3123]: E0120 01:52:20.055060 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.056247 kubelet[3123]: W0120 01:52:20.055116 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.056247 kubelet[3123]: E0120 01:52:20.055145 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.094837 kubelet[3123]: E0120 01:52:20.094787 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.095269 kubelet[3123]: W0120 01:52:20.095066 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.095269 kubelet[3123]: E0120 01:52:20.095189 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.106366 kubelet[3123]: E0120 01:52:20.106246 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.114234 kubelet[3123]: W0120 01:52:20.113583 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.114234 kubelet[3123]: E0120 01:52:20.113771 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.117926 kubelet[3123]: E0120 01:52:20.116281 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.117926 kubelet[3123]: W0120 01:52:20.116309 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.117926 kubelet[3123]: E0120 01:52:20.116336 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.119498 kubelet[3123]: E0120 01:52:20.119468 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.119609 kubelet[3123]: W0120 01:52:20.119587 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.119779 kubelet[3123]: E0120 01:52:20.119752 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.125114 kubelet[3123]: E0120 01:52:20.125073 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.125264 kubelet[3123]: W0120 01:52:20.125243 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.128195 kubelet[3123]: E0120 01:52:20.128162 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.135046 kubelet[3123]: E0120 01:52:20.134873 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.135280 kubelet[3123]: W0120 01:52:20.135195 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.135383 kubelet[3123]: E0120 01:52:20.135233 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.148976 kubelet[3123]: E0120 01:52:20.148923 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.149325 kubelet[3123]: W0120 01:52:20.149159 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.149325 kubelet[3123]: E0120 01:52:20.149199 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.156139 kubelet[3123]: E0120 01:52:20.156051 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.156139 kubelet[3123]: W0120 01:52:20.156085 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.156139 kubelet[3123]: E0120 01:52:20.156113 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.162238 kubelet[3123]: E0120 01:52:20.160398 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.162238 kubelet[3123]: W0120 01:52:20.160855 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.162238 kubelet[3123]: E0120 01:52:20.160888 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.166493 kubelet[3123]: E0120 01:52:20.164499 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.166493 kubelet[3123]: W0120 01:52:20.164524 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.166493 kubelet[3123]: E0120 01:52:20.164549 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.174149 kubelet[3123]: E0120 01:52:20.167995 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.174149 kubelet[3123]: W0120 01:52:20.168038 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.174149 kubelet[3123]: E0120 01:52:20.168064 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:20.187969 kubelet[3123]: E0120 01:52:20.186323 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:20.187969 kubelet[3123]: W0120 01:52:20.186372 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:20.187969 kubelet[3123]: E0120 01:52:20.186408 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
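The repeating three-line pattern above is the kubelet's FlexVolume prober: each probe executes the driver binary with the single argument init and unmarshals its stdout as a JSON status object, and because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the call produces empty output and the "unexpected end of JSON input" failure. A minimal sketch of a driver entry point that would satisfy the init probe, in Go; this stub is an assumption for illustration, not the real nodeagent~uds driver:

    // flexvolume-stub: minimal sketch of a FlexVolume driver entry point.
    // Hypothetical stand-in for the missing nodeagent~uds/uds executable.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the JSON object the kubelet's driver-call
    // expects on stdout ("Success", "Failure", or "Not supported").
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// An empty reply here is exactly what produces the
    		// "unexpected end of JSON input" errors in the log above.
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Every other call is reported as unsupported by this stub.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }

Presumably either installing a working executable at that path or removing the stale nodeagent~uds directory would quiet the loop; the kubelet re-probes the plugin directory continuously, which is why the same three messages repeat with only the timestamps changing.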
Jan 20 01:52:20.778793 kubelet[3123]: E0120 01:52:20.770768 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:25.442004 kubelet[3123]: E0120 01:52:25.436834 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:52:25.442004 kubelet[3123]: E0120 01:52:25.439585 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:25.841546 kubelet[3123]: E0120 01:52:25.803031 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:52:25.940664 kubelet[3123]: E0120 01:52:25.940620 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:25.946596 kubelet[3123]: W0120 01:52:25.945995 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:25.946596 kubelet[3123]: E0120 01:52:25.946049 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:25.946963 kubelet[3123]: E0120 01:52:25.946941 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:25.947080 kubelet[3123]: W0120 01:52:25.947058 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:25.947183 kubelet[3123]: E0120 01:52:25.947162 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:25.951284 kubelet[3123]: E0120 01:52:25.951250 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:25.951461 kubelet[3123]: W0120 01:52:25.951436 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:25.951573 kubelet[3123]: E0120 01:52:25.951550 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:25.965107 kubelet[3123]: E0120 01:52:25.954939 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:25.965107 kubelet[3123]: W0120 01:52:25.964757 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:25.965107 kubelet[3123]: E0120 01:52:25.964843 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:25.999924 kubelet[3123]: E0120 01:52:25.999660 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:26.004742 kubelet[3123]: W0120 01:52:26.004629 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:26.005409 kubelet[3123]: E0120 01:52:26.005174 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:26.795204 kubelet[3123]: E0120 01:52:26.769487 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:28.770271 kubelet[3123]: E0120 01:52:28.768525 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:30.461556 kubelet[3123]: E0120 01:52:30.458908 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:52:30.777660 kubelet[3123]: E0120 01:52:30.768337 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:32.787956 kubelet[3123]: E0120 01:52:32.787434 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:33.853253 kubelet[3123]: E0120 01:52:33.852766 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
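The dns.go:153 warning is the kubelet enforcing the resolver's three-nameserver limit (glibc's MAXNS): only the first three entries of the node's resolv.conf survive into the applied nameserver line shown above. A hypothetical resolv.conf that would trigger exactly this message; the fourth entry is an assumption for illustration, since only the count past three matters:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4    # assumed fourth entry; anything past the third is omitted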
Jan 20 01:52:35.474859 kubelet[3123]: E0120 01:52:35.469534 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:52:35.807905 kubelet[3123]: E0120 01:52:35.807782 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:52:36.407616 containerd[1643]: time="2026-01-20T01:52:36.406595631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:52:36.426001 containerd[1643]: time="2026-01-20T01:52:36.423302033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 20 01:52:36.431908 containerd[1643]: time="2026-01-20T01:52:36.431835882Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:52:36.442773 containerd[1643]: time="2026-01-20T01:52:36.442431874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:52:36.443603 containerd[1643]: time="2026-01-20T01:52:36.443562360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 20.079690501s" Jan 20 01:52:36.443866 containerd[1643]: time="2026-01-20T01:52:36.443840588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 01:52:36.453644 containerd[1643]: time="2026-01-20T01:52:36.449661957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:52:36.481124 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
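For scale: the typha pull above reports bytes read=33735893 over 20.079690501s, which, assuming bytes read counts the compressed transfer, works out to roughly 33.7 MB / 20.1 s ≈ 1.7 MB/s from ghcr.io, against an unpacked image size of 35234482 bytes; the next PullImage, for pod2daemon-flexvol, starts immediately afterwards.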
Jan 20 01:52:36.560006 containerd[1643]: time="2026-01-20T01:52:36.558934219Z" level=info msg="CreateContainer within sandbox \"f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 20 01:52:36.672415 containerd[1643]: time="2026-01-20T01:52:36.664351250Z" level=info msg="Container 7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:52:36.745922 containerd[1643]: time="2026-01-20T01:52:36.743409778Z" level=info msg="CreateContainer within sandbox \"f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1\""
Jan 20 01:52:36.751742 containerd[1643]: time="2026-01-20T01:52:36.750968507Z" level=info msg="StartContainer for \"7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1\""
Jan 20 01:52:36.767256 containerd[1643]: time="2026-01-20T01:52:36.764651582Z" level=info msg="connecting to shim 7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1" address="unix:///run/containerd/s/0911ec49f7a6bc811a7a4cff8080ec2f9d3035d84454ae8e04aff4bdd78ace46" protocol=ttrpc version=3
Jan 20 01:52:36.768131 systemd-tmpfiles[3854]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 01:52:36.778868 systemd-tmpfiles[3854]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 01:52:36.779621 systemd-tmpfiles[3854]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 01:52:36.806835 systemd-tmpfiles[3854]: ACLs are not supported, ignoring.
Jan 20 01:52:36.807027 systemd-tmpfiles[3854]: ACLs are not supported, ignoring.
Jan 20 01:52:36.889068 systemd-tmpfiles[3854]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:52:36.889297 systemd-tmpfiles[3854]: Skipping /boot
Jan 20 01:52:36.926126 systemd[1]: Started cri-containerd-7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1.scope - libcontainer container 7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1.
Jan 20 01:52:36.983037 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 01:52:36.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:52:36.987859 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Jan 20 01:52:37.028916 kernel: kauditd_printk_skb: 52 callbacks suppressed
Jan 20 01:52:37.033851 kernel: audit: type=1130 audit(1768873956.987:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:52:37.049787 kernel: audit: type=1131 audit(1768873956.987:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:52:36.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:52:37.148000 audit: BPF prog-id=159 op=LOAD
Jan 20 01:52:37.176236 kernel: audit: type=1334 audit(1768873957.148:569): prog-id=159 op=LOAD
Jan 20 01:52:37.176755 kernel: audit: type=1334 audit(1768873957.148:570): prog-id=160 op=LOAD
Jan 20 01:52:37.148000 audit: BPF prog-id=160 op=LOAD
Jan 20 01:52:37.148000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.224661 kernel: audit: type=1300 audit(1768873957.148:570): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.148000 audit: BPF prog-id=160 op=UNLOAD
Jan 20 01:52:37.294439 kernel: audit: type=1327 audit(1768873957.148:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.294649 kernel: audit: type=1334 audit(1768873957.148:571): prog-id=160 op=UNLOAD
Jan 20 01:52:37.148000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.358468 kernel: audit: type=1300 audit(1768873957.148:571): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.358778 kernel: audit: type=1327 audit(1768873957.148:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.148000 audit: BPF prog-id=161 op=LOAD
Jan 20 01:52:37.148000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.444994 kernel: audit: type=1334 audit(1768873957.148:572): prog-id=161 op=LOAD
Jan 20 01:52:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.159000 audit: BPF prog-id=162 op=LOAD
Jan 20 01:52:37.159000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.159000 audit: BPF prog-id=162 op=UNLOAD
Jan 20 01:52:37.159000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.159000 audit: BPF prog-id=161 op=UNLOAD
Jan 20 01:52:37.159000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
Jan 20 01:52:37.159000 audit: BPF prog-id=163 op=LOAD
Jan 20 01:52:37.159000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3597 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:37.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531
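The audit records above are runc setting up the calico-typha container: on x86-64, syscall=321 is bpf(2) (the prog-id LOAD/UNLOAD pairs are likely the cgroup v2 device-controller filters runc attaches) and syscall=3 is close(2). The PROCTITLE field is the process's argv, hex-encoded with NUL separators and truncated by the kernel at 128 bytes; a short Go sketch to decode the value repeated in these records:

    package main

    import (
    	"bytes"
    	"encoding/hex"
    	"fmt"
    )

    func main() {
    	// PROCTITLE value copied verbatim from the audit records above.
    	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761346437633931643961323266666561633738343866306633373531"
    	raw, err := hex.DecodeString(proctitle)
    	if err != nil {
    		panic(err)
    	}
    	// argv elements are NUL-separated; join them with spaces for display.
    	fmt.Printf("%s\n", bytes.ReplaceAll(raw, []byte{0}, []byte(" ")))
    	// Prints: runc --root /run/containerd/runc/k8s.io --log
    	// /run/containerd/io.containerd.runtime.v2.task/k8s.io/7a4d7c91d9a22ffeac7848f0f3751
    	// i.e. the container ID 7a4d7c91d9a2... from the shim logs, cut off
    	// at the 128-byte proctitle limit.
    }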
Jan 20 01:52:37.676860 containerd[1643]: time="2026-01-20T01:52:37.676241598Z" level=info msg="StartContainer for \"7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1\" returns successfully"
Jan 20 01:52:37.780190 kubelet[3123]: E0120 01:52:37.775432 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:38.191243 kubelet[3123]: E0120 01:52:38.186955 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:52:38.341907 kubelet[3123]: E0120 01:52:38.335988 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.341907 kubelet[3123]: W0120 01:52:38.336033 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.341907 kubelet[3123]: E0120 01:52:38.336067 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.341907 kubelet[3123]: E0120 01:52:38.340918 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.341907 kubelet[3123]: W0120 01:52:38.340952 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.341907 kubelet[3123]: E0120 01:52:38.340988 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.347371 kubelet[3123]: E0120 01:52:38.345442 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.347371 kubelet[3123]: W0120 01:52:38.345477 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.347371 kubelet[3123]: E0120 01:52:38.345561 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.352451 kubelet[3123]: E0120 01:52:38.352347 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.352451 kubelet[3123]: W0120 01:52:38.352382 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.352451 kubelet[3123]: E0120 01:52:38.352413 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.384128 kubelet[3123]: E0120 01:52:38.383771 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.384128 kubelet[3123]: W0120 01:52:38.383824 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.384128 kubelet[3123]: E0120 01:52:38.383859 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.434916 kubelet[3123]: E0120 01:52:38.427818 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.434916 kubelet[3123]: W0120 01:52:38.427867 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.434916 kubelet[3123]: E0120 01:52:38.427904 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.439167 kubelet[3123]: E0120 01:52:38.439133 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.439260 kubelet[3123]: W0120 01:52:38.439239 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.439342 kubelet[3123]: E0120 01:52:38.439324 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.452366 kubelet[3123]: E0120 01:52:38.451907 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.452366 kubelet[3123]: W0120 01:52:38.451958 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.452366 kubelet[3123]: E0120 01:52:38.451997 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.452806 kubelet[3123]: E0120 01:52:38.452464 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.452806 kubelet[3123]: W0120 01:52:38.452481 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.452806 kubelet[3123]: E0120 01:52:38.452506 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.478627 kubelet[3123]: E0120 01:52:38.478573 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.481061 kubelet[3123]: W0120 01:52:38.480846 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.481061 kubelet[3123]: E0120 01:52:38.480894 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.490363 kubelet[3123]: E0120 01:52:38.490027 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.490363 kubelet[3123]: W0120 01:52:38.490070 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.490363 kubelet[3123]: E0120 01:52:38.490104 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.496564 kubelet[3123]: E0120 01:52:38.495944 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.496564 kubelet[3123]: W0120 01:52:38.495978 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.496564 kubelet[3123]: E0120 01:52:38.496007 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.506391 kubelet[3123]: E0120 01:52:38.506215 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.506999 kubelet[3123]: W0120 01:52:38.506811 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.506999 kubelet[3123]: E0120 01:52:38.506865 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.507566 kubelet[3123]: E0120 01:52:38.507543 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.519206 kubelet[3123]: W0120 01:52:38.518962 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.519206 kubelet[3123]: E0120 01:52:38.519041 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.524180 kubelet[3123]: E0120 01:52:38.524134 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.524338 kubelet[3123]: W0120 01:52:38.524312 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.524453 kubelet[3123]: E0120 01:52:38.524429 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.542763 kubelet[3123]: E0120 01:52:38.539575 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.542763 kubelet[3123]: W0120 01:52:38.539927 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.542763 kubelet[3123]: E0120 01:52:38.539962 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.542763 kubelet[3123]: I0120 01:52:38.542439 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76ffb8b7db-l9ggx" podStartSLOduration=6.438635573 podStartE2EDuration="26.542420103s" podCreationTimestamp="2026-01-20 01:52:12 +0000 UTC" firstStartedPulling="2026-01-20 01:52:16.344998939 +0000 UTC m=+150.793447644" lastFinishedPulling="2026-01-20 01:52:36.44878347 +0000 UTC m=+170.897232174" observedRunningTime="2026-01-20 01:52:38.470260523 +0000 UTC m=+172.918709248" watchObservedRunningTime="2026-01-20 01:52:38.542420103 +0000 UTC m=+172.990868828"
Jan 20 01:52:38.585794 kubelet[3123]: E0120 01:52:38.583418 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.585794 kubelet[3123]: W0120 01:52:38.583460 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.585794 kubelet[3123]: E0120 01:52:38.583491 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.672305 kubelet[3123]: E0120 01:52:38.662451 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.715234 kubelet[3123]: W0120 01:52:38.671618 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.715234 kubelet[3123]: E0120 01:52:38.685130 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
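The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (01:52:38.542420103 - 01:52:12 = 26.542420103s), and podStartSLOduration subtracts the image-pull window (26.542420103 - (36.44878347 - 16.344998939) = 6.438635572s, matching the reported 6.438635573 up to rounding). In other words, calico-typha spent roughly 20.1 of its 26.5 seconds of startup pulling images.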
Jan 20 01:52:38.734750 kubelet[3123]: E0120 01:52:38.724418 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.734750 kubelet[3123]: W0120 01:52:38.724467 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.734750 kubelet[3123]: E0120 01:52:38.724504 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.749937 kubelet[3123]: E0120 01:52:38.744188 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.749937 kubelet[3123]: W0120 01:52:38.744273 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.749937 kubelet[3123]: E0120 01:52:38.744311 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.756591 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.779491 kubelet[3123]: W0120 01:52:38.756764 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.756805 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.759884 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.779491 kubelet[3123]: W0120 01:52:38.759910 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.759941 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.760794 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.779491 kubelet[3123]: W0120 01:52:38.760813 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.760835 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.779491 kubelet[3123]: E0120 01:52:38.761347 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.780215 kubelet[3123]: W0120 01:52:38.761366 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.780215 kubelet[3123]: E0120 01:52:38.761387 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.780215 kubelet[3123]: E0120 01:52:38.761930 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.780215 kubelet[3123]: W0120 01:52:38.761946 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.780215 kubelet[3123]: E0120 01:52:38.761975 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.780215 kubelet[3123]: E0120 01:52:38.774443 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.780215 kubelet[3123]: W0120 01:52:38.774481 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.780215 kubelet[3123]: E0120 01:52:38.774516 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.799278 kubelet[3123]: E0120 01:52:38.796281 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.799278 kubelet[3123]: W0120 01:52:38.796318 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.799278 kubelet[3123]: E0120 01:52:38.796354 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.807834 kubelet[3123]: E0120 01:52:38.805444 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.807834 kubelet[3123]: W0120 01:52:38.805531 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.807834 kubelet[3123]: E0120 01:52:38.805573 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.823231 kubelet[3123]: E0120 01:52:38.823134 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.823231 kubelet[3123]: W0120 01:52:38.823173 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.823231 kubelet[3123]: E0120 01:52:38.823199 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.843382 kubelet[3123]: E0120 01:52:38.843150 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.849381 kubelet[3123]: W0120 01:52:38.843284 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.849381 kubelet[3123]: E0120 01:52:38.843810 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.855275 kubelet[3123]: E0120 01:52:38.854770 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.855275 kubelet[3123]: W0120 01:52:38.854807 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.855275 kubelet[3123]: E0120 01:52:38.854839 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:38.868953 kubelet[3123]: E0120 01:52:38.868892 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:38.876240 kubelet[3123]: W0120 01:52:38.869207 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:38.876240 kubelet[3123]: E0120 01:52:38.869266 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.224823 kubelet[3123]: E0120 01:52:39.218084 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.224823 kubelet[3123]: W0120 01:52:39.218264 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.224823 kubelet[3123]: E0120 01:52:39.218598 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.375157 kubelet[3123]: E0120 01:52:39.366332 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:52:39.467264 kubelet[3123]: E0120 01:52:39.463139 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.467264 kubelet[3123]: W0120 01:52:39.463199 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.467264 kubelet[3123]: E0120 01:52:39.463240 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.487553 kubelet[3123]: E0120 01:52:39.485529 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.487553 kubelet[3123]: W0120 01:52:39.485579 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.487553 kubelet[3123]: E0120 01:52:39.485617 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.535450 kubelet[3123]: E0120 01:52:39.534025 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.535450 kubelet[3123]: W0120 01:52:39.534111 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.535450 kubelet[3123]: E0120 01:52:39.534144 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.566039 kubelet[3123]: E0120 01:52:39.539817 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.566039 kubelet[3123]: W0120 01:52:39.547232 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.566039 kubelet[3123]: E0120 01:52:39.547819 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.583185 kubelet[3123]: E0120 01:52:39.582450 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.606961 kubelet[3123]: W0120 01:52:39.605376 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.619401 kubelet[3123]: E0120 01:52:39.617183 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.621489 kubelet[3123]: E0120 01:52:39.621052 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.621489 kubelet[3123]: W0120 01:52:39.621205 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.621489 kubelet[3123]: E0120 01:52:39.621239 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.621489 kubelet[3123]: E0120 01:52:39.623108 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.621489 kubelet[3123]: W0120 01:52:39.623128 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.621489 kubelet[3123]: E0120 01:52:39.623152 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.624475 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.637950 kubelet[3123]: W0120 01:52:39.624494 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.624512 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.626042 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.637950 kubelet[3123]: W0120 01:52:39.626147 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.626176 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.631402 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.637950 kubelet[3123]: W0120 01:52:39.631427 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.631576 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.637950 kubelet[3123]: E0120 01:52:39.632923 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.638303 kubelet[3123]: W0120 01:52:39.632939 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.638303 kubelet[3123]: E0120 01:52:39.632959 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.638303 kubelet[3123]: E0120 01:52:39.633939 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.638303 kubelet[3123]: W0120 01:52:39.633955 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.638303 kubelet[3123]: E0120 01:52:39.633973 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.638303 kubelet[3123]: E0120 01:52:39.638076 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.638303 kubelet[3123]: W0120 01:52:39.638097 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.638303 kubelet[3123]: E0120 01:52:39.638121 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.638497 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:52:39.676997 kubelet[3123]: W0120 01:52:39.638513 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.638531 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.638983 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.676997 kubelet[3123]: W0120 01:52:39.638999 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.639013 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.648239 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.676997 kubelet[3123]: W0120 01:52:39.648287 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.648322 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.676997 kubelet[3123]: E0120 01:52:39.649202 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.677477 kubelet[3123]: W0120 01:52:39.649224 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.649247 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.649922 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.677477 kubelet[3123]: W0120 01:52:39.649939 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.649960 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.650370 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.677477 kubelet[3123]: W0120 01:52:39.650385 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.650400 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:39.677477 kubelet[3123]: E0120 01:52:39.668072 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.677477 kubelet[3123]: W0120 01:52:39.668102 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.677927 kubelet[3123]: E0120 01:52:39.668133 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.677927 kubelet[3123]: E0120 01:52:39.673849 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.677927 kubelet[3123]: W0120 01:52:39.674118 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.677927 kubelet[3123]: E0120 01:52:39.674278 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.701171 kubelet[3123]: E0120 01:52:39.699197 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.701171 kubelet[3123]: W0120 01:52:39.699233 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.701171 kubelet[3123]: E0120 01:52:39.699266 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.710085 kubelet[3123]: E0120 01:52:39.706981 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.710085 kubelet[3123]: W0120 01:52:39.707071 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.710085 kubelet[3123]: E0120 01:52:39.707109 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.714884 kubelet[3123]: E0120 01:52:39.713157 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.714884 kubelet[3123]: W0120 01:52:39.713237 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.719042 kubelet[3123]: E0120 01:52:39.717844 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:39.746900 kubelet[3123]: E0120 01:52:39.741568 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.746900 kubelet[3123]: W0120 01:52:39.741614 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.746900 kubelet[3123]: E0120 01:52:39.741650 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.756827 kubelet[3123]: E0120 01:52:39.756380 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.756827 kubelet[3123]: W0120 01:52:39.756431 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.756827 kubelet[3123]: E0120 01:52:39.756467 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.834455 kubelet[3123]: E0120 01:52:39.833863 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.834455 kubelet[3123]: W0120 01:52:39.833912 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.834455 kubelet[3123]: E0120 01:52:39.833946 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.841646 kubelet[3123]: E0120 01:52:39.835212 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.841646 kubelet[3123]: W0120 01:52:39.835239 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.841646 kubelet[3123]: E0120 01:52:39.835263 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.841646 kubelet[3123]: E0120 01:52:39.836519 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.841646 kubelet[3123]: W0120 01:52:39.836559 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.841646 kubelet[3123]: E0120 01:52:39.836594 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.842457 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.880320 kubelet[3123]: W0120 01:52:39.842487 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.842513 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.843122 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.880320 kubelet[3123]: W0120 01:52:39.843147 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.843172 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.857132 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.880320 kubelet[3123]: W0120 01:52:39.857162 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.857188 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:52:39.880320 kubelet[3123]: E0120 01:52:39.857621 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:52:39.911667 kubelet[3123]: W0120 01:52:39.857640 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:52:39.911667 kubelet[3123]: E0120 01:52:39.857661 3123 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
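The loop above is mechanical: kubelet execs every file it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument init and parses stdout as JSON. Here the nodeagent~uds directory (a name matching Istio's old node-agent UDS FlexVolume driver, which appears to be the leftover) exists, but the uds executable inside it does not, so the call produces empty output and the JSON decode fails with exactly the "unexpected end of JSON input" seen in every triplet. For reference, a minimal sketch of a driver that would satisfy the init handshake; it follows the documented FlexVolume calling convention, and the file would live at the path named in the log:

    #!/usr/bin/env python3
    # Minimal FlexVolume driver stub: kubelet calls "<driver> init" at probe
    # time and parses stdout as JSON; an empty reply produces exactly the
    # "unexpected end of JSON input" error seen in the log above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that this driver needs no attach/detach.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Decline every other call (mount, unmount, ...) per the convention.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Deleting the stale nodeagent~uds directory would stop the noise just as well; kubelet re-probes the plugin directory continuously, which is why the same three lines recur every few tens of milliseconds.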
Jan 20 01:52:39.911667 kubelet[3123]: E0120 01:52:39.859522 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:40.359640 kubelet[3123]: E0120 01:52:40.359590 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
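The dns.go record is an unrelated, benign warning: the host's resolv.conf lists more than three nameservers, and kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) because glibc resolvers honor at most three. A rough sketch of the same check, assuming the conventional /etc/resolv.conf location:

    #!/usr/bin/env python3
    # Mimic kubelet's resolv.conf check: at most three nameservers are
    # applied (the glibc MAXNS limit); any extras are logged as omitted.
    MAX_NAMESERVERS = 3

    def check(path: str = "/etc/resolv.conf") -> None:
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            applied = " ".join(servers[:MAX_NAMESERVERS])
            print(f"Nameserver limits exceeded; applied nameserver line is: {applied}")
        else:
            print(f"ok: {len(servers)} nameserver(s)")

    if __name__ == "__main__":
        check()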
Jan 20 01:52:40.521459 kubelet[3123]: E0120 01:52:40.519472 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
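NetworkReady=false in these records simply means the runtime has not yet found a CNI network configuration; Calico is still installing itself (the pod2daemon-flexvol pull below is its flexvol-driver init container), so pods such as csi-node-driver-x6f5h cannot be set up yet. A sketch of the check the runtime effectively performs, assuming containerd's default confdir /etc/cni/net.d:

    #!/usr/bin/env python3
    # Check whether a CNI network config exists yet; kubelet's NetworkReady
    # condition stays false ("cni plugin not initialized") until the
    # runtime finds one.
    import pathlib

    CNI_CONF_DIR = pathlib.Path("/etc/cni/net.d")  # containerd's default confdir

    def main() -> None:
        if not CNI_CONF_DIR.is_dir():
            print(f"{CNI_CONF_DIR} does not exist -> NetworkReady=false")
            return
        confs = sorted(p.name for p in CNI_CONF_DIR.iterdir()
                       if p.suffix in (".conf", ".conflist", ".json"))
        if confs:
            print("CNI configs present:", ", ".join(confs))
        else:
            print("no CNI config yet -> 'cni plugin not initialized'")

    if __name__ == "__main__":
        main()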
Jan 20 01:52:40.622447 containerd[1643]: time="2026-01-20T01:52:40.622107410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:52:40.630184 containerd[1643]: time="2026-01-20T01:52:40.630062850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579"
Jan 20 01:52:40.646881 containerd[1643]: time="2026-01-20T01:52:40.646561939Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:52:40.685373 containerd[1643]: time="2026-01-20T01:52:40.685315703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:52:40.710535 containerd[1643]: time="2026-01-20T01:52:40.704647252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 4.250725764s"
Jan 20 01:52:40.712458 containerd[1643]: time="2026-01-20T01:52:40.710854280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
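The pull records give enough numbers for a sanity check: 4,442,579 compressed bytes fetched and a 5,941,314-byte unpacked image over 4.250725764 s, so roughly 1 MB/s on the wire. A throwaway calculation with the values from the log:

    #!/usr/bin/env python3
    # Back-of-the-envelope throughput for the pod2daemon-flexvol pull
    # logged above: compressed bytes read vs. unpacked image size over
    # the reported wall time.
    BYTES_READ = 4_442_579      # "active requests=0, bytes read=4442579"
    IMAGE_SIZE = 5_941_314      # size "5941314"
    WALL_SECONDS = 4.250725764  # "in 4.250725764s"

    wire_rate = BYTES_READ / WALL_SECONDS
    unpack_rate = IMAGE_SIZE / WALL_SECONDS
    print(f"wire:     {wire_rate / 1e6:.2f} MB/s")    # ~1.05 MB/s
    print(f"unpacked: {unpack_rate / 1e6:.2f} MB/s")  # ~1.40 MB/s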
Jan 20 01:52:40.898920 containerd[1643]: time="2026-01-20T01:52:40.848144620Z" level=info msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 20 01:52:41.025406 containerd[1643]: time="2026-01-20T01:52:41.023088640Z" level=info msg="Container df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:52:41.062907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925845555.mount: Deactivated successfully.
Jan 20 01:52:41.198239 containerd[1643]: time="2026-01-20T01:52:41.197957923Z" level=info msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5\""
Jan 20 01:52:41.209171 containerd[1643]: time="2026-01-20T01:52:41.203137046Z" level=info msg="StartContainer for \"df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5\""
Jan 20 01:52:41.217811 containerd[1643]: time="2026-01-20T01:52:41.217379888Z" level=info msg="connecting to shim df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5" address="unix:///run/containerd/s/9dd5e4039d6f5b3712b1257a02176431d1406dc965366ea78fa428141f709a9a" protocol=ttrpc version=3
Jan 20 01:52:41.690279 systemd[1]: Started cri-containerd-df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5.scope - libcontainer container df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5.
Jan 20 01:52:41.920376 kubelet[3123]: E0120 01:52:41.920319 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:42.357266 kernel: kauditd_printk_skb: 14 callbacks suppressed
Jan 20 01:52:42.338000 audit: BPF prog-id=164 op=LOAD
Jan 20 01:52:42.338000 audit[4005]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000148488 a2=98 a3=0 items=0 ppid=3731 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:42.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466323362373539663562346461383965393663333136643835343034
Jan 20 01:52:42.338000 audit: BPF prog-id=165 op=LOAD
Jan 20 01:52:42.338000 audit[4005]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000148218 a2=98 a3=0 items=0 ppid=3731 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:42.338000 audit: BPF prog-id=165 op=UNLOAD
Jan 20 01:52:42.338000 audit[4005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:42.338000 audit: BPF prog-id=164 op=UNLOAD
Jan 20 01:52:42.338000 audit[4005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:52:42.338000 audit: BPF prog-id=166 op=LOAD
Jan 20 01:52:42.338000 audit[4005]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001486e8 a2=98 a3=0 items=0 ppid=3731 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
[the byte-identical PROCTITLE record follows each SYSCALL above, and the kernel ring buffer echoes every audit record a second time as "kernel: audit: type=13xx audit(...)"; those duplicates are trimmed]
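The audit PROCTITLE field is the process's argv buffer, hex-encoded with NUL separators and capped by the kernel at 128 bytes (ausearch -i performs the same decoding). Decoding the value above shows it is runc launching the flexvol-driver task; a sketch:

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value: it is the raw argv buffer,
    # hex-encoded, with NUL bytes between arguments.
    HEX = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
           "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
           "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
           "696F2F6466323362373539663562346461383965393663333136643835343034")

    args = bytes.fromhex(HEX).decode().split("\x00")
    print(args)
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/df23b759f5b4da89e96c316d85404']
    # The final path is cut short because the buffer is exactly 128 bytes:
    # the kernel truncates proctitle at that length.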
Jan 20 01:52:43.013910 containerd[1643]: time="2026-01-20T01:52:43.013550553Z" level=info msg="StartContainer for \"df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5\" returns successfully"
Jan 20 01:52:43.019933 systemd[1]: cri-containerd-df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5.scope: Deactivated successfully.
Jan 20 01:52:43.046000 audit: BPF prog-id=166 op=UNLOAD
Jan 20 01:52:43.051652 containerd[1643]: time="2026-01-20T01:52:43.051440194Z" level=info msg="received container exit event container_id:\"df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5\" id:\"df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5\" pid:4019 exited_at:{seconds:1768873963 nanos:41650409}"
Jan 20 01:52:43.244908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5-rootfs.mount: Deactivated successfully.
Jan 20 01:52:43.566624 kubelet[3123]: E0120 01:52:43.564943 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:52:43.571766 containerd[1643]: time="2026-01-20T01:52:43.566363752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 20 01:52:43.779024 kubelet[3123]: E0120 01:52:43.772262 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:45.003900 kubelet[3123]: E0120 01:52:45.002479 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:52:45.536599 kubelet[3123]: E0120 01:52:45.536543 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:52:46.773054 kubelet[3123]: E0120 01:52:46.767224 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:53:05.523175 kernel: sched: DL replenish lagged too much
Jan 20 01:53:07.664606 systemd[1]: cri-containerd-63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907.scope: Deactivated successfully.
Jan 20 01:53:07.762967 systemd[1]: cri-containerd-63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907.scope: Consumed 13.937s CPU time, 56.4M memory peak, 484K read from disk.
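The "Consumed ..." summary that systemd prints when a scope is torn down comes from the unit's cgroup counters. A sketch that reads the same numbers for a live unit on a cgroup v2 hierarchy; the file names are the standard cgroup v2 interface (memory.peak needs a reasonably recent kernel, which this 6.12 host has), and the exact scope path is an assumption; kubelet-managed scopes typically sit somewhere below /sys/fs/cgroup under kubepods.slice:

    #!/usr/bin/env python3
    # Read the counters behind systemd's "Consumed X CPU time, Y memory
    # peak, Z read from disk" message from a unit's cgroup (cgroup v2).
    import pathlib
    import sys

    def consumed(cgroup: pathlib.Path) -> None:
        # cpu.stat: "usage_usec N" etc., one key/value pair per line
        cpu = dict(line.split()
                   for line in (cgroup / "cpu.stat").read_text().splitlines())
        usec = int(cpu["usage_usec"])
        # memory.peak: high-water mark of the memory controller, in bytes
        peak = int((cgroup / "memory.peak").read_text())
        # io.stat: "MAJ:MIN rbytes=... wbytes=..." per device; sum the reads
        rbytes = sum(int(field.split("=")[1])
                     for line in (cgroup / "io.stat").read_text().splitlines()
                     for field in line.split()[1:]
                     if field.startswith("rbytes="))
        print(f"Consumed {usec / 1e6:.3f}s CPU time, "
              f"{peak / 2**20:.1f}M memory peak, "
              f"{rbytes / 2**10:.0f}K read from disk.")

    if __name__ == "__main__":
        consumed(pathlib.Path(sys.argv[1]))

Invoked with the cgroup directory of a scope like cri-containerd-<id>.scope, this prints the same line systemd logs at teardown.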
Jan 20 01:53:07.792000 audit: BPF prog-id=106 op=UNLOAD
Jan 20 01:53:07.865192 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 20 01:53:07.792000 audit: BPF prog-id=110 op=UNLOAD
Jan 20 01:53:07.928501 systemd[1]: cri-containerd-5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43.scope: Deactivated successfully.
Jan 20 01:53:07.929245 systemd[1]: cri-containerd-5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43.scope: Consumed 12.125s CPU time, 22.6M memory peak, 128K read from disk.
Jan 20 01:53:07.812000 audit: BPF prog-id=167 op=LOAD
Jan 20 01:53:07.812000 audit: BPF prog-id=91 op=UNLOAD
Jan 20 01:53:07.937000 audit: BPF prog-id=168 op=LOAD
Jan 20 01:53:07.937000 audit: BPF prog-id=81 op=UNLOAD
Jan 20 01:53:08.164129 containerd[1643]: time="2026-01-20T01:53:08.131117919Z" level=info msg="received container exit event container_id:\"5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43\" id:\"5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43\" pid:2951 exit_status:1 exited_at:{seconds:1768873988 nanos:121995273}"
Jan 20 01:53:08.029000 audit: BPF prog-id=96 op=UNLOAD
Jan 20 01:53:08.029000 audit: BPF prog-id=105 op=UNLOAD
[kernel ring-buffer echoes of the audit records above ("kernel: audit: type=1334 audit(...)") trimmed]
Jan 20 01:53:08.244394 kubelet[3123]: E0120 01:53:08.235979 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:53:08.297788 containerd[1643]: time="2026-01-20T01:53:08.297537227Z" level=info msg="received container exit event container_id:\"63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907\" id:\"63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907\" pid:2968 exit_status:1 exited_at:{seconds:1768873988 nanos:130426866}"
Jan 20 01:53:08.307939 kubelet[3123]: E0120 01:53:08.307885 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:53:09.061646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43-rootfs.mount: Deactivated successfully.
Jan 20 01:53:09.174454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907-rootfs.mount: Deactivated successfully.
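containerd's exit events carry raw epoch timestamps. Converting exited_at back to the journal's wall-clock form is a one-liner, and confirms that both containers (evidently the kube-scheduler and kube-controller-manager tasks, given the Attempt:1 re-creations that follow) died within about 10 ms of each other:

    #!/usr/bin/env python3
    # Convert the epoch timestamps in containerd exit events back to the
    # journal's wall-clock form.
    from datetime import datetime, timezone

    def exited_at(seconds: int, nanos: int) -> str:
        ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
        return ts.strftime("%b %d %H:%M:%S.%f")

    # exit events for the two crashed control-plane containers above
    print(exited_at(1768873988, 121995273))  # Jan 20 01:53:08.121995
    print(exited_at(1768873988, 130426866))  # Jan 20 01:53:08.130427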
Jan 20 01:53:09.339818 kubelet[3123]: E0120 01:53:09.335118 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:09.517016 kubelet[3123]: I0120 01:53:09.516768 3123 scope.go:117] "RemoveContainer" containerID="5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43" Jan 20 01:53:09.517016 kubelet[3123]: E0120 01:53:09.516935 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:09.558777 containerd[1643]: time="2026-01-20T01:53:09.557924780Z" level=info msg="CreateContainer within sandbox \"031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 01:53:09.572365 kubelet[3123]: I0120 01:53:09.571018 3123 scope.go:117] "RemoveContainer" containerID="63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907" Jan 20 01:53:09.600908 kubelet[3123]: E0120 01:53:09.600598 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:09.640792 containerd[1643]: time="2026-01-20T01:53:09.640331098Z" level=info msg="CreateContainer within sandbox \"c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:53:09.857034 containerd[1643]: time="2026-01-20T01:53:09.855201308Z" level=info msg="Container 96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:09.872828 kubelet[3123]: E0120 01:53:09.859135 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:10.041605 containerd[1643]: time="2026-01-20T01:53:10.041452560Z" level=info msg="CreateContainer within sandbox \"031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2\"" Jan 20 01:53:10.055772 containerd[1643]: time="2026-01-20T01:53:10.055332991Z" level=info msg="StartContainer for \"96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2\"" Jan 20 01:53:10.140886 containerd[1643]: time="2026-01-20T01:53:10.139801913Z" level=info msg="connecting to shim 96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2" address="unix:///run/containerd/s/adbcef3ee3c5a6ebf35e7d50672629f7bc553c77f0fef6dfe0dc49de4b8e5235" protocol=ttrpc version=3 Jan 20 01:53:10.161228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957878545.mount: Deactivated successfully. 
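The recurring dns.go:153 warnings are kubelet's resolv.conf validation: the glibc resolver honors at most three nameserver entries, so kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A standalone sketch of the same check (an illustration, not kubelet's actual code):

    from pathlib import Path

    MAX_NAMESERVERS = 3  # glibc MAXNS: entries past the third are ignored

    def applied_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
        servers = []
        for line in Path(path).read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits exceeded; applying:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers[:MAX_NAMESERVERS]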
Jan 20 01:53:10.283557 containerd[1643]: time="2026-01-20T01:53:10.268533047Z" level=info msg="Container 3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:10.409588 kubelet[3123]: E0120 01:53:10.409182 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:10.611061 containerd[1643]: time="2026-01-20T01:53:10.603017649Z" level=info msg="CreateContainer within sandbox \"c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531\"" Jan 20 01:53:10.620596 containerd[1643]: time="2026-01-20T01:53:10.613928385Z" level=info msg="StartContainer for \"3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531\"" Jan 20 01:53:10.708050 containerd[1643]: time="2026-01-20T01:53:10.637809770Z" level=info msg="connecting to shim 3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531" address="unix:///run/containerd/s/4aea575429a2cd7e5f9f767a35b5e4a00b86ff3e051b43713b2194618990aec5" protocol=ttrpc version=3 Jan 20 01:53:10.643458 systemd[1]: Started cri-containerd-96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2.scope - libcontainer container 96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2. Jan 20 01:53:11.336111 systemd[1]: Started cri-containerd-3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531.scope - libcontainer container 3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531. Jan 20 01:53:11.533000 audit: BPF prog-id=169 op=LOAD Jan 20 01:53:11.566556 kernel: audit: type=1334 audit(1768873991.533:591): prog-id=169 op=LOAD Jan 20 01:53:11.571000 audit: BPF prog-id=170 op=LOAD Jan 20 01:53:11.663783 kernel: audit: type=1334 audit(1768873991.571:592): prog-id=170 op=LOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=170 op=UNLOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=171 op=LOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=172 op=LOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=172 op=UNLOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=171 op=UNLOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.571000 audit: BPF prog-id=173 op=LOAD Jan 20 01:53:11.571000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2787 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:11.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936636363376230336234643161326664366538303464633431353732 Jan 20 01:53:11.820290 kubelet[3123]: E0120 01:53:11.803261 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" 
podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:12.056000 audit: BPF prog-id=174 op=LOAD Jan 20 01:53:12.077000 audit: BPF prog-id=175 op=LOAD Jan 20 01:53:12.077000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.077000 audit: BPF prog-id=175 op=UNLOAD Jan 20 01:53:12.077000 audit[4099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.099000 audit: BPF prog-id=176 op=LOAD Jan 20 01:53:12.099000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.099000 audit: BPF prog-id=177 op=LOAD Jan 20 01:53:12.099000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.109000 audit: BPF prog-id=177 op=UNLOAD Jan 20 01:53:12.109000 audit[4099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.109000 audit: BPF prog-id=176 
op=UNLOAD Jan 20 01:53:12.109000 audit[4099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:12.109000 audit: BPF prog-id=178 op=LOAD Jan 20 01:53:12.109000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2800 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:12.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363346266666361373830346137356637333536373163653738386234 Jan 20 01:53:13.183066 containerd[1643]: time="2026-01-20T01:53:13.183002368Z" level=info msg="StartContainer for \"96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2\" returns successfully" Jan 20 01:53:13.319644 containerd[1643]: time="2026-01-20T01:53:13.319574279Z" level=info msg="StartContainer for \"3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531\" returns successfully" Jan 20 01:53:13.354958 kubelet[3123]: E0120 01:53:13.354445 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:13.778782 kubelet[3123]: E0120 01:53:13.772506 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:14.285089 kubelet[3123]: E0120 01:53:14.281989 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:14.495857 kubelet[3123]: E0120 01:53:14.493150 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:15.530183 kubelet[3123]: E0120 01:53:15.524649 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:15.537996 kubelet[3123]: E0120 01:53:15.537325 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:15.793204 kubelet[3123]: E0120 01:53:15.788385 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:17.767560 kubelet[3123]: E0120 01:53:17.767029 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:18.378218 kubelet[3123]: E0120 01:53:18.374761 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:19.767812 kubelet[3123]: E0120 01:53:19.767057 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:20.618071 kubelet[3123]: E0120 01:53:20.617254 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:21.109881 containerd[1643]: time="2026-01-20T01:53:21.109333067Z" level=info msg="container event discarded" container=031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.125398 containerd[1643]: time="2026-01-20T01:53:21.124553013Z" level=info msg="container event discarded" container=031289bb32c5540cad52ff70b85767c3a2b55e3d17aad2550b4d749f716f9e7a type=CONTAINER_STARTED_EVENT Jan 20 01:53:21.181419 containerd[1643]: time="2026-01-20T01:53:21.171211400Z" level=info msg="container event discarded" container=27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12 type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.181419 containerd[1643]: time="2026-01-20T01:53:21.171380491Z" level=info msg="container event discarded" container=27db5926942211e02da172dd2d138e4f623b3bbb1024c8e4d172c2b1c0719a12 type=CONTAINER_STARTED_EVENT Jan 20 01:53:21.261401 containerd[1643]: time="2026-01-20T01:53:21.261325383Z" level=info msg="container event discarded" container=c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3 type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.261401 containerd[1643]: time="2026-01-20T01:53:21.261384959Z" level=info msg="container event discarded" container=c885f762d91cfb67fe429fbaedcc24674553182e035ca4156ebc0a71391bd6f3 type=CONTAINER_STARTED_EVENT Jan 20 01:53:21.515190 containerd[1643]: time="2026-01-20T01:53:21.499434720Z" level=info msg="container event discarded" container=5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43 type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.538446 containerd[1643]: time="2026-01-20T01:53:21.537860910Z" level=info msg="container event discarded" container=63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907 type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.538446 containerd[1643]: time="2026-01-20T01:53:21.537919474Z" level=info msg="container event discarded" container=c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad type=CONTAINER_CREATED_EVENT Jan 20 01:53:21.806655 kubelet[3123]: E0120 01:53:21.769866 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:22.845095 containerd[1643]: time="2026-01-20T01:53:22.844895748Z" level=info msg="container event discarded" container=c70168fc55c1b265f2f30a01dd07fd0e5f7160685bbb475b77bf356704e226ad type=CONTAINER_STARTED_EVENT Jan 20 01:53:22.942973 containerd[1643]: time="2026-01-20T01:53:22.942790449Z" level=info msg="container event discarded" container=63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907 type=CONTAINER_STARTED_EVENT Jan 20 01:53:22.969010 containerd[1643]: time="2026-01-20T01:53:22.964874199Z" level=info msg="container event discarded" container=5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43 type=CONTAINER_STARTED_EVENT Jan 20 01:53:23.308094 kubelet[3123]: E0120 01:53:23.307933 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:23.390111 kubelet[3123]: E0120 01:53:23.390000 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:23.793402 kubelet[3123]: E0120 01:53:23.774289 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:23.839189 kubelet[3123]: E0120 01:53:23.838140 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:24.871469 kubelet[3123]: E0120 01:53:24.857372 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:25.770089 kubelet[3123]: E0120 01:53:25.768462 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:26.842421 kubelet[3123]: E0120 01:53:26.836376 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:27.975257 kubelet[3123]: E0120 01:53:27.974621 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:28.433756 kubelet[3123]: E0120 01:53:28.433501 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:28.858098 kubelet[3123]: 
E0120 01:53:28.856208 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:29.773513 kubelet[3123]: E0120 01:53:29.772604 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:30.652316 kubelet[3123]: E0120 01:53:30.650768 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:31.770103 kubelet[3123]: E0120 01:53:31.768152 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:33.455087 kubelet[3123]: E0120 01:53:33.454934 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:33.777838 kubelet[3123]: E0120 01:53:33.772823 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:34.276313 containerd[1643]: time="2026-01-20T01:53:34.274254067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:53:34.285502 containerd[1643]: time="2026-01-20T01:53:34.281591986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442323" Jan 20 01:53:34.298804 containerd[1643]: time="2026-01-20T01:53:34.297207230Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:53:34.313888 containerd[1643]: time="2026-01-20T01:53:34.313828271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:53:34.326210 containerd[1643]: time="2026-01-20T01:53:34.324220924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 50.757754812s" Jan 20 01:53:34.326210 containerd[1643]: time="2026-01-20T01:53:34.324275957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 01:53:34.387815 containerd[1643]: time="2026-01-20T01:53:34.379830943Z" level=info 
msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:53:34.467103 containerd[1643]: time="2026-01-20T01:53:34.466022304Z" level=info msg="Container 01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:34.522173 containerd[1643]: time="2026-01-20T01:53:34.521593817Z" level=info msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76\"" Jan 20 01:53:34.526076 containerd[1643]: time="2026-01-20T01:53:34.526005691Z" level=info msg="StartContainer for \"01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76\"" Jan 20 01:53:34.540776 containerd[1643]: time="2026-01-20T01:53:34.539608459Z" level=info msg="connecting to shim 01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76" address="unix:///run/containerd/s/9dd5e4039d6f5b3712b1257a02176431d1406dc965366ea78fa428141f709a9a" protocol=ttrpc version=3 Jan 20 01:53:34.806121 systemd[1]: Started cri-containerd-01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76.scope - libcontainer container 01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76. Jan 20 01:53:35.167867 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 20 01:53:35.168167 kernel: audit: type=1334 audit(1768874015.149:607): prog-id=179 op=LOAD Jan 20 01:53:35.149000 audit: BPF prog-id=179 op=LOAD Jan 20 01:53:35.149000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.222665 kernel: audit: type=1300 audit(1768874015.149:607): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.222894 kernel: audit: type=1327 audit(1768874015.149:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.257431 kernel: audit: type=1334 audit(1768874015.149:608): prog-id=180 op=LOAD Jan 20 01:53:35.149000 audit: BPF prog-id=180 op=LOAD Jan 20 01:53:35.149000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.356569 kernel: audit: type=1300 audit(1768874015.149:608): arch=c000003e syscall=321 
success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.356798 kernel: audit: type=1327 audit(1768874015.149:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.149000 audit: BPF prog-id=180 op=UNLOAD Jan 20 01:53:35.149000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.429379 kernel: audit: type=1334 audit(1768874015.149:609): prog-id=180 op=UNLOAD Jan 20 01:53:35.429644 kernel: audit: type=1300 audit(1768874015.149:609): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.527856 kernel: audit: type=1327 audit(1768874015.149:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.528375 kernel: audit: type=1334 audit(1768874015.149:610): prog-id=179 op=UNLOAD Jan 20 01:53:35.149000 audit: BPF prog-id=179 op=UNLOAD Jan 20 01:53:35.149000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.149000 audit: BPF prog-id=181 op=LOAD Jan 20 01:53:35.149000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3731 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:35.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031643566383930383838306131616562376631366633636664336133 Jan 20 01:53:35.571799 containerd[1643]: time="2026-01-20T01:53:35.570801742Z" level=info msg="StartContainer for \"01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76\" returns successfully" Jan 20 01:53:35.773874 kubelet[3123]: E0120 01:53:35.770278 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:36.571204 kubelet[3123]: E0120 01:53:36.567815 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:37.845202 kubelet[3123]: E0120 01:53:37.839488 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:37.845202 kubelet[3123]: E0120 01:53:37.840341 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:38.467840 kubelet[3123]: E0120 01:53:38.463586 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:40.055632 kubelet[3123]: E0120 01:53:40.054895 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:41.848037 kubelet[3123]: E0120 01:53:41.847004 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:43.786226 kubelet[3123]: E0120 01:53:43.785590 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:43.850103 kubelet[3123]: E0120 01:53:43.824778 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:44.936000 audit[4206]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:44.963280 
kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 01:53:44.963498 kernel: audit: type=1325 audit(1768874024.936:612): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:44.936000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbf5e8ff0 a2=0 a3=7ffdbf5e8fdc items=0 ppid=3237 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:45.143812 kernel: audit: type=1300 audit(1768874024.936:612): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbf5e8ff0 a2=0 a3=7ffdbf5e8fdc items=0 ppid=3237 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:45.143965 kernel: audit: type=1327 audit(1768874024.936:612): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:44.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:45.196000 audit[4206]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:45.196000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdbf5e8ff0 a2=0 a3=7ffdbf5e8fdc items=0 ppid=3237 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:45.363878 kernel: audit: type=1325 audit(1768874025.196:613): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:45.364033 kernel: audit: type=1300 audit(1768874025.196:613): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdbf5e8ff0 a2=0 a3=7ffdbf5e8fdc items=0 ppid=3237 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:45.364102 kernel: audit: type=1327 audit(1768874025.196:613): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:45.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:45.808828 kubelet[3123]: E0120 01:53:45.802622 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:47.795447 kubelet[3123]: E0120 01:53:47.795009 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" 
podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:48.833807 kubelet[3123]: E0120 01:53:48.809965 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:49.793874 kubelet[3123]: E0120 01:53:49.793804 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:51.781818 kubelet[3123]: E0120 01:53:51.771919 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:53.787439 kubelet[3123]: E0120 01:53:53.780639 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:53.846204 kubelet[3123]: E0120 01:53:53.846142 3123 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:53:55.794937 kubelet[3123]: E0120 01:53:55.783334 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:57.781399 kubelet[3123]: E0120 01:53:57.781330 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:53:57.836142 systemd[1]: cri-containerd-01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76.scope: Deactivated successfully. Jan 20 01:53:57.859450 containerd[1643]: time="2026-01-20T01:53:57.843072211Z" level=info msg="received container exit event container_id:\"01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76\" id:\"01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76\" pid:4179 exited_at:{seconds:1768874037 nanos:841461405}" Jan 20 01:53:57.839154 systemd[1]: cri-containerd-01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76.scope: Consumed 4.588s CPU time, 176.1M memory peak, 3.7M read from disk, 171.3M written to disk. Jan 20 01:53:57.856000 audit: BPF prog-id=181 op=UNLOAD Jan 20 01:53:57.880074 kernel: audit: type=1334 audit(1768874037.856:614): prog-id=181 op=UNLOAD Jan 20 01:53:58.371657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76-rootfs.mount: Deactivated successfully. 
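Decoding the PROCTITLE of the NETFILTER_CFG records above the same way (see the decode_proctitle sketch earlier) shows the rule programming, most likely kube-proxy's periodic sync, going through the nft-backed iptables-restore: -w 5 waits up to five seconds for the xtables lock, -W 100000 sets the lock retry interval in microseconds, --noflush leaves unrelated rules in place, and --counters preserves packet/byte counts:

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters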
Jan 20 01:53:58.741370 kubelet[3123]: E0120 01:53:58.740898 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:58.768893 containerd[1643]: time="2026-01-20T01:53:58.765993364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:53:59.861089 systemd[1]: Created slice kubepods-besteffort-podeeb09d5e_8a63_4fca_910b_ea49fa1ecf05.slice - libcontainer container kubepods-besteffort-podeeb09d5e_8a63_4fca_910b_ea49fa1ecf05.slice. Jan 20 01:53:59.907365 containerd[1643]: time="2026-01-20T01:53:59.907307122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:01.708669 kubelet[3123]: I0120 01:54:01.659671 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-backend-key-pair\") pod \"whisker-6455dcb75d-z7fp4\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:01.708669 kubelet[3123]: I0120 01:54:01.705088 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-ca-bundle\") pod \"whisker-6455dcb75d-z7fp4\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:01.708669 kubelet[3123]: I0120 01:54:01.705136 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmjq\" (UniqueName: \"kubernetes.io/projected/68cbc571-4445-4166-912c-8fdfe252aae2-kube-api-access-hlmjq\") pod \"calico-kube-controllers-86d7bc7b4f-k5t2j\" (UID: \"68cbc571-4445-4166-912c-8fdfe252aae2\") " pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:01.708669 kubelet[3123]: I0120 01:54:01.705162 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/589f656f-1e0a-4667-bc0d-42908aab3340-calico-apiserver-certs\") pod \"calico-apiserver-74b798b596-wbvft\" (UID: \"589f656f-1e0a-4667-bc0d-42908aab3340\") " pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:01.708669 kubelet[3123]: I0120 01:54:01.705196 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68cbc571-4445-4166-912c-8fdfe252aae2-tigera-ca-bundle\") pod \"calico-kube-controllers-86d7bc7b4f-k5t2j\" (UID: \"68cbc571-4445-4166-912c-8fdfe252aae2\") " pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:01.716854 kubelet[3123]: I0120 01:54:01.705225 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfnb\" (UniqueName: \"kubernetes.io/projected/285811f9-e547-431f-a7b0-90e1226d2f4d-kube-api-access-vgfnb\") pod \"goldmane-666569f655-szpvj\" (UID: \"285811f9-e547-431f-a7b0-90e1226d2f4d\") " pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:01.716854 kubelet[3123]: I0120 01:54:01.705256 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gbt2k\" (UniqueName: \"kubernetes.io/projected/589f656f-1e0a-4667-bc0d-42908aab3340-kube-api-access-gbt2k\") pod \"calico-apiserver-74b798b596-wbvft\" (UID: \"589f656f-1e0a-4667-bc0d-42908aab3340\") " pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:01.716854 kubelet[3123]: I0120 01:54:01.705290 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05750e4a-a6e9-4631-9a1f-786fc076da7e-config-volume\") pod \"coredns-674b8bbfcf-hknvv\" (UID: \"05750e4a-a6e9-4631-9a1f-786fc076da7e\") " pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:01.716854 kubelet[3123]: I0120 01:54:01.708469 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285811f9-e547-431f-a7b0-90e1226d2f4d-goldmane-ca-bundle\") pod \"goldmane-666569f655-szpvj\" (UID: \"285811f9-e547-431f-a7b0-90e1226d2f4d\") " pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:01.716854 kubelet[3123]: I0120 01:54:01.708507 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8m84\" (UniqueName: \"kubernetes.io/projected/05750e4a-a6e9-4631-9a1f-786fc076da7e-kube-api-access-p8m84\") pod \"coredns-674b8bbfcf-hknvv\" (UID: \"05750e4a-a6e9-4631-9a1f-786fc076da7e\") " pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:01.725812 kubelet[3123]: I0120 01:54:01.722218 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/285811f9-e547-431f-a7b0-90e1226d2f4d-goldmane-key-pair\") pod \"goldmane-666569f655-szpvj\" (UID: \"285811f9-e547-431f-a7b0-90e1226d2f4d\") " pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:01.725812 kubelet[3123]: I0120 01:54:01.725473 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dce8f61b-70a0-47ff-b7a3-9a49a15c7261-config-volume\") pod \"coredns-674b8bbfcf-ft2sl\" (UID: \"dce8f61b-70a0-47ff-b7a3-9a49a15c7261\") " pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:54:01.725812 kubelet[3123]: I0120 01:54:01.725532 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwgh\" (UniqueName: \"kubernetes.io/projected/dce8f61b-70a0-47ff-b7a3-9a49a15c7261-kube-api-access-2hwgh\") pod \"coredns-674b8bbfcf-ft2sl\" (UID: \"dce8f61b-70a0-47ff-b7a3-9a49a15c7261\") " pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:54:01.725812 kubelet[3123]: I0120 01:54:01.725565 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fd7k\" (UniqueName: \"kubernetes.io/projected/73120413-751d-4a6a-a82b-54ccc2e8bc99-kube-api-access-4fd7k\") pod \"whisker-6455dcb75d-z7fp4\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:01.725812 kubelet[3123]: I0120 01:54:01.725593 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285811f9-e547-431f-a7b0-90e1226d2f4d-config\") pod \"goldmane-666569f655-szpvj\" (UID: \"285811f9-e547-431f-a7b0-90e1226d2f4d\") " pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:01.797432 systemd[1]: 
Created slice kubepods-burstable-poddce8f61b_70a0_47ff_b7a3_9a49a15c7261.slice - libcontainer container kubepods-burstable-poddce8f61b_70a0_47ff_b7a3_9a49a15c7261.slice. Jan 20 01:54:01.834275 kubelet[3123]: I0120 01:54:01.833012 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpjk\" (UniqueName: \"kubernetes.io/projected/03f653dd-0210-41e9-9d70-a3905826baa1-kube-api-access-fvpjk\") pod \"calico-apiserver-74b798b596-r7ptx\" (UID: \"03f653dd-0210-41e9-9d70-a3905826baa1\") " pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:01.834275 kubelet[3123]: I0120 01:54:01.833178 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03f653dd-0210-41e9-9d70-a3905826baa1-calico-apiserver-certs\") pod \"calico-apiserver-74b798b596-r7ptx\" (UID: \"03f653dd-0210-41e9-9d70-a3905826baa1\") " pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:02.126331 systemd[1]: Created slice kubepods-besteffort-pod285811f9_e547_431f_a7b0_90e1226d2f4d.slice - libcontainer container kubepods-besteffort-pod285811f9_e547_431f_a7b0_90e1226d2f4d.slice. Jan 20 01:54:02.353101 systemd[1]: Created slice kubepods-besteffort-pod68cbc571_4445_4166_912c_8fdfe252aae2.slice - libcontainer container kubepods-besteffort-pod68cbc571_4445_4166_912c_8fdfe252aae2.slice. Jan 20 01:54:03.014832 containerd[1643]: time="2026-01-20T01:54:03.014583198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:03.133468 kubelet[3123]: E0120 01:54:03.133061 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:03.200906 containerd[1643]: time="2026-01-20T01:54:03.196768174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:54:03.200906 containerd[1643]: time="2026-01-20T01:54:03.197400332Z" level=error msg="Failed to destroy network for sandbox \"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:03.224244 systemd[1]: run-netns-cni\x2dcd74122a\x2d9dbb\x2d1771\x2d10f2\x2d6c4712909250.mount: Deactivated successfully. Jan 20 01:54:03.299427 containerd[1643]: time="2026-01-20T01:54:03.298923148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:03.839367 systemd[1]: Created slice kubepods-besteffort-pod73120413_751d_4a6a_a82b_54ccc2e8bc99.slice - libcontainer container kubepods-besteffort-pod73120413_751d_4a6a_a82b_54ccc2e8bc99.slice. Jan 20 01:54:04.001170 systemd[1]: Created slice kubepods-besteffort-pod589f656f_1e0a_4667_bc0d_42908aab3340.slice - libcontainer container kubepods-besteffort-pod589f656f_1e0a_4667_bc0d_42908aab3340.slice. 
Jan 20 01:54:04.102964 containerd[1643]: time="2026-01-20T01:54:04.102818746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:04.207147 containerd[1643]: time="2026-01-20T01:54:04.206424702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:04.225318 kubelet[3123]: E0120 01:54:04.215140 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:04.225318 kubelet[3123]: E0120 01:54:04.215283 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:04.225318 kubelet[3123]: E0120 01:54:04.215325 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:04.226243 kubelet[3123]: E0120 01:54:04.215398 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8999d7b1f0ac3af12fdc54d91396c758cb4f91b139e5abdb09e3d123a70a01b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:54:04.438009 systemd[1]: Created slice kubepods-burstable-pod05750e4a_a6e9_4631_9a1f_786fc076da7e.slice - libcontainer container kubepods-burstable-pod05750e4a_a6e9_4631_9a1f_786fc076da7e.slice. 
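Every failed sandbox in this stretch bottoms out in the same stat: the Calico CNI plugin refuses to add or delete pod networks until /var/lib/calico/nodename exists, and per the error text that file only appears once the calico/node container is running (its install-cni step was still being pulled and run above). An illustrative readiness probe mirroring that check (not Calico's actual code):

    import os

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node at startup

    def calico_node_ready() -> bool:
        # Mirrors the stat the CNI plugin performs before any add/delete;
        # a miss yields the "no such file or directory" errors in the log.
        try:
            os.stat(NODENAME_FILE)
            return True
        except FileNotFoundError:
            return False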
Jan 20 01:54:04.458186 containerd[1643]: time="2026-01-20T01:54:04.457781498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:04.694043 kubelet[3123]: E0120 01:54:04.672180 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:04.714553 containerd[1643]: time="2026-01-20T01:54:04.710037921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}"
Jan 20 01:54:04.998782 systemd[1]: Created slice kubepods-besteffort-pod03f653dd_0210_41e9_9d70_a3905826baa1.slice - libcontainer container kubepods-besteffort-pod03f653dd_0210_41e9_9d70_a3905826baa1.slice.
Jan 20 01:54:05.133259 containerd[1643]: time="2026-01-20T01:54:05.128996914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:08.618593 containerd[1643]: time="2026-01-20T01:54:08.591184531Z" level=error msg="Failed to destroy network for sandbox \"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:08.631860 systemd[1]: run-netns-cni\x2d3ec4acba\x2d29f5\x2db259\x2d2ae3\x2d06781780ccd2.mount: Deactivated successfully.
Jan 20 01:54:09.008323 containerd[1643]: time="2026-01-20T01:54:09.006078310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.024320 kubelet[3123]: E0120 01:54:09.023497 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.028671 kubelet[3123]: E0120 01:54:09.025762 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:54:09.028671 kubelet[3123]: E0120 01:54:09.028302 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:54:09.028671 kubelet[3123]: E0120 01:54:09.028410 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b65315b1176007e5e41aeba079b3df276092a751372b2039a717ff0fd92204ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 01:54:09.271947 containerd[1643]: time="2026-01-20T01:54:09.267152263Z" level=error msg="Failed to destroy network for sandbox \"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.296061 systemd[1]: run-netns-cni\x2d4a7c46f7\x2d9f35\x2df234\x2d3fb5\x2d152d17da769a.mount: Deactivated successfully.
Jan 20 01:54:09.326813 containerd[1643]: time="2026-01-20T01:54:09.326668486Z" level=error msg="Failed to destroy network for sandbox \"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.347835 containerd[1643]: time="2026-01-20T01:54:09.336329427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.347835 containerd[1643]: time="2026-01-20T01:54:09.344955614Z" level=error msg="Failed to destroy network for sandbox \"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.347220 systemd[1]: run-netns-cni\x2dfe64c83d\x2da95e\x2d6f9e\x2d0843\x2de0d27ea8b055.mount: Deactivated successfully.
Jan 20 01:54:09.348372 kubelet[3123]: E0120 01:54:09.338289 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.348372 kubelet[3123]: E0120 01:54:09.338376 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:09.348372 kubelet[3123]: E0120 01:54:09.338407 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:09.348562 kubelet[3123]: E0120 01:54:09.338530 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7531b85eae34b21d5fe7115482d5e437efb3f3595fc5a45f7da8ccf52e14c322\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261"
Jan 20 01:54:09.357944 systemd[1]: run-netns-cni\x2d90437862\x2d947b\x2d33c2\x2dc748\x2daf0a0f1c6b95.mount: Deactivated successfully.
Jan 20 01:54:09.394860 containerd[1643]: time="2026-01-20T01:54:09.388922762Z" level=error msg="Failed to destroy network for sandbox \"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.396530 containerd[1643]: time="2026-01-20T01:54:09.396484176Z" level=error msg="Failed to destroy network for sandbox \"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.404805 systemd[1]: run-netns-cni\x2db294ff5c\x2d2e45\x2d2526\x2d918b\x2d28175315e7e3.mount: Deactivated successfully.
Jan 20 01:54:09.424433 containerd[1643]: time="2026-01-20T01:54:09.424364460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.445227 kubelet[3123]: E0120 01:54:09.439391 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.445227 kubelet[3123]: E0120 01:54:09.439480 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:54:09.445227 kubelet[3123]: E0120 01:54:09.439516 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:54:09.446349 kubelet[3123]: E0120 01:54:09.439657 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50257ba9ffff55d8b09f248147b538f4bad01bfde88de2aa66c6d23e26d527c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 01:54:09.482254 containerd[1643]: time="2026-01-20T01:54:09.482133756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.497454 containerd[1643]: time="2026-01-20T01:54:09.490648308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.505240 containerd[1643]: time="2026-01-20T01:54:09.504893142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.512286 kubelet[3123]: E0120 01:54:09.512226 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.527928 kubelet[3123]: E0120 01:54:09.526913 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:54:09.527928 kubelet[3123]: E0120 01:54:09.527001 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:54:09.527928 kubelet[3123]: E0120 01:54:09.512226 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.527928 kubelet[3123]: E0120 01:54:09.527406 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:54:09.528255 kubelet[3123]: E0120 01:54:09.527101 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2132e3266cf6293c391b41ce51e6679d1b5b91d76dc076d7e9edc85575e10329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e"
Jan 20 01:54:09.528255 kubelet[3123]: E0120 01:54:09.514183 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.528255 kubelet[3123]: E0120 01:54:09.528119 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:54:09.552195 kubelet[3123]: E0120 01:54:09.546123 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:54:09.552195 kubelet[3123]: E0120 01:54:09.546301 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97998cf53391a0eee50eae59ea4f2cc368a7159cec371863829403d973e799a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 01:54:09.552195 kubelet[3123]: E0120 01:54:09.546374 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:54:09.568064 kubelet[3123]: E0120 01:54:09.546425 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8449a3593484100b50c3729d65607d71c5dce4faf6402e84ae53830d8bcb156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99"
Jan 20 01:54:09.642030 systemd[1]: run-netns-cni\x2d6763a706\x2d2774\x2d9a28\x2d37ca\x2d68def8bc70e4.mount: Deactivated successfully.
Jan 20 01:54:09.759361 containerd[1643]: time="2026-01-20T01:54:09.759295417Z" level=error msg="Failed to destroy network for sandbox \"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.794425 systemd[1]: run-netns-cni\x2dbc44500c\x2dcb67\x2de084\x2d7143\x2de0a903b2c484.mount: Deactivated successfully.
Jan 20 01:54:09.858388 containerd[1643]: time="2026-01-20T01:54:09.844933749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.908507 kubelet[3123]: E0120 01:54:09.891806 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:09.908507 kubelet[3123]: E0120 01:54:09.892016 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:54:09.908507 kubelet[3123]: E0120 01:54:09.892057 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:54:09.914500 kubelet[3123]: E0120 01:54:09.892129 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff72cf3942cec7f506d9e54e2b100d6318cd87e07dafdca457abaa1306be46db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340"
Jan 20 01:54:19.849199 kubelet[3123]: E0120 01:54:19.846542 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:20.016284 containerd[1643]: time="2026-01-20T01:54:20.001035800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}"
Jan 20 01:54:20.016284 containerd[1643]: time="2026-01-20T01:54:20.010920440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:20.792465 containerd[1643]: time="2026-01-20T01:54:20.787926756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:21.829255 containerd[1643]: time="2026-01-20T01:54:21.829148518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:21.855106 containerd[1643]: time="2026-01-20T01:54:21.831340325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:22.218544 containerd[1643]: time="2026-01-20T01:54:22.218007808Z" level=error msg="Failed to destroy network for sandbox \"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.254137 systemd[1]: run-netns-cni\x2dd85525ed\x2dd1a6\x2d024d\x2d4f07\x2daa484996864d.mount: Deactivated successfully.
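The recurring dns.go:153 warnings in this stretch come from kubelet truncating the host's resolver list: it propagates at most three nameservers to a pod (matching the glibc resolver limit) and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8, when entries are dropped. A Go sketch of that truncation under the stated three-server assumption; illustrative only, not kubelet's code:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // the cap behind the warning, mirroring glibc's limit

    // applyLimit collects nameserver entries from a resolv.conf body and
    // keeps only the first three, as the "applied nameserver line" shows.
    func applyLimit(resolvConf string) []string {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        // A hypothetical four-server resolv.conf; the fourth entry is omitted.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        fmt.Println(strings.Join(applyLimit(conf), " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
    }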
Jan 20 01:54:22.468542 containerd[1643]: time="2026-01-20T01:54:22.464127361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.490659 kubelet[3123]: E0120 01:54:22.489512 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.497785 kubelet[3123]: E0120 01:54:22.492387 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:22.500829 kubelet[3123]: E0120 01:54:22.498854 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:22.500829 kubelet[3123]: E0120 01:54:22.499249 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b79d20bb70cf1dfca4f50390dcc7cfa5eb629c9a544a1cd017e8398016945164\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261"
Jan 20 01:54:22.521215 containerd[1643]: time="2026-01-20T01:54:22.521017650Z" level=error msg="Failed to destroy network for sandbox \"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.551907 systemd[1]: run-netns-cni\x2d6e3f99e9\x2de6a7\x2d0e9a\x2d018d\x2de4df529780ee.mount: Deactivated successfully.
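The run-netns mount units being cleaned up here use systemd unit-name escaping: "/" in the backing path becomes "-", and a literal "-" becomes "\x2d", so each unit maps back to a CNI network-namespace path under /run/netns. A small Go sketch of that decoding for the pattern seen in this log (not systemd's full unescaping logic):

    package main

    import (
        "fmt"
        "strings"
    )

    // unitToPath reverses the escaping in the mount unit names above:
    // "\x2d" protects literal dashes, while plain "-" marks path separators.
    func unitToPath(unit string) string {
        s := strings.TrimSuffix(unit, ".mount")
        s = strings.ReplaceAll(s, `\x2d`, "\x00") // shield literal dashes
        s = strings.ReplaceAll(s, "-", "/")       // unit separators -> "/"
        s = strings.ReplaceAll(s, "\x00", "-")
        return "/" + s
    }

    func main() {
        // Prints /run/netns/cni-6e3f99e9-e6a7-0e9a-018d-e4df529780ee,
        // the namespace behind the mount unit deactivated above.
        fmt.Println(unitToPath(`run-netns-cni\x2d6e3f99e9\x2de6a7\x2d0e9a\x2d018d\x2de4df529780ee.mount`))
    }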
Jan 20 01:54:22.637650 containerd[1643]: time="2026-01-20T01:54:22.634220426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.638104 kubelet[3123]: E0120 01:54:22.636988 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:22.638104 kubelet[3123]: E0120 01:54:22.637091 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:54:22.638104 kubelet[3123]: E0120 01:54:22.637130 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:54:22.638297 kubelet[3123]: E0120 01:54:22.637200 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fbad9c506d9fede2392870a120b24cb141a4c79c7e01342330f26a720057048\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:54:22.800604 kubelet[3123]: E0120 01:54:22.787335 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:22.800873 containerd[1643]: time="2026-01-20T01:54:22.795306034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:22.863562 containerd[1643]: time="2026-01-20T01:54:22.862775887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}"
Jan 20 01:54:23.810207 containerd[1643]: time="2026-01-20T01:54:23.810108131Z" level=error msg="Failed to destroy network for sandbox \"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:23.845310 systemd[1]: run-netns-cni\x2d36780428\x2d807b\x2d273b\x2d12b6\x2ddb832e7fa14d.mount: Deactivated successfully.
Jan 20 01:54:23.890488 containerd[1643]: time="2026-01-20T01:54:23.888441809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:23.896663 kubelet[3123]: E0120 01:54:23.894431 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:23.896663 kubelet[3123]: E0120 01:54:23.894562 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:54:23.896663 kubelet[3123]: E0120 01:54:23.894597 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:54:23.897475 kubelet[3123]: E0120 01:54:23.894759 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7c5b4e2f63e06730f199a8c01ff3c69af684ce2faf0e85bf877c92a8b370635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99"
Jan 20 01:54:24.075813 containerd[1643]: time="2026-01-20T01:54:24.015268860Z" level=error msg="Failed to destroy network for sandbox \"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.075813 containerd[1643]: time="2026-01-20T01:54:24.064147030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.076186 kubelet[3123]: E0120 01:54:24.065266 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.076186 kubelet[3123]: E0120 01:54:24.065577 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:54:24.076186 kubelet[3123]: E0120 01:54:24.065624 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:54:24.077821 kubelet[3123]: E0120 01:54:24.065786 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"523fabbaf6172f12d0c3bb45722384d3016fdacdfe56bf4672eab9e14a264522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 01:54:24.105379 systemd[1]: run-netns-cni\x2d8b7c023b\x2dfead\x2d6a95\x2d338b\x2d91cb2dc653cc.mount: Deactivated successfully.
Jan 20 01:54:24.211626 containerd[1643]: time="2026-01-20T01:54:24.182411867Z" level=error msg="Failed to destroy network for sandbox \"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.198601 systemd[1]: run-netns-cni\x2d2ae2bfae\x2de1c5\x2da560\x2d4375\x2dd4bfc613ce48.mount: Deactivated successfully.
Jan 20 01:54:24.256435 containerd[1643]: time="2026-01-20T01:54:24.255393553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.256797 kubelet[3123]: E0120 01:54:24.256659 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:24.256918 kubelet[3123]: E0120 01:54:24.256827 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:54:24.256918 kubelet[3123]: E0120 01:54:24.256863 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:54:24.257004 kubelet[3123]: E0120 01:54:24.256933 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d2bdcce2489f05270aebd6112aabbff6644a55dca811487f2ee1f5deedb57c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340"
Jan 20 01:54:24.804073 containerd[1643]: time="2026-01-20T01:54:24.803911682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:24.910293 containerd[1643]: time="2026-01-20T01:54:24.889476264Z" level=error msg="Failed to destroy network for sandbox \"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.042211 systemd[1]: run-netns-cni\x2dfa28f28c\x2db21a\x2d4710\x2d1fba\x2d054870417570.mount: Deactivated successfully.
Jan 20 01:54:25.091108 kubelet[3123]: E0120 01:54:25.054653 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.091108 kubelet[3123]: E0120 01:54:25.054841 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:54:25.091108 kubelet[3123]: E0120 01:54:25.054880 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:54:25.091990 containerd[1643]: time="2026-01-20T01:54:25.047659571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.092124 kubelet[3123]: E0120 01:54:25.054955 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8161a87d0d211ea8b2919ba452e26ef4b6c0b4a870ada9df153e12170aea93e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 01:54:25.107912 containerd[1643]: time="2026-01-20T01:54:25.107832016Z" level=error msg="Failed to destroy network for sandbox \"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.132028 systemd[1]: run-netns-cni\x2d06953b28\x2d65e5\x2da414\x2def02\x2dc54751f29b8c.mount: Deactivated successfully.
Jan 20 01:54:25.142404 containerd[1643]: time="2026-01-20T01:54:25.140587142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.152488 kubelet[3123]: E0120 01:54:25.144489 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:25.152488 kubelet[3123]: E0120 01:54:25.149134 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:54:25.152488 kubelet[3123]: E0120 01:54:25.149318 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:54:25.152888 kubelet[3123]: E0120 01:54:25.149788 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0f373690a4b739a43127a8ac0bb8b4474e7d7ae98780225828ce6c355001f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e"
Jan 20 01:54:26.820862 containerd[1643]: time="2026-01-20T01:54:26.809827605Z" level=error msg="Failed to destroy network for sandbox \"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:26.847639 systemd[1]: run-netns-cni\x2da0ae3dcc\x2db4dd\x2dc1ca\x2d00a2\x2dcb26fc70199d.mount: Deactivated successfully.
Jan 20 01:54:26.926899 containerd[1643]: time="2026-01-20T01:54:26.926814841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:26.947257 kubelet[3123]: E0120 01:54:26.932475 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:26.947257 kubelet[3123]: E0120 01:54:26.941808 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:54:26.947257 kubelet[3123]: E0120 01:54:26.941864 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:54:26.957971 kubelet[3123]: E0120 01:54:26.941982 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"451277d34b328e58641fb887f5e9a22c53327266bd99bd42d0437096a4bffe05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 01:54:33.802798 kubelet[3123]: E0120 01:54:33.802505 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:33.871878 containerd[1643]: time="2026-01-20T01:54:33.852296536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}"
Jan 20 01:54:34.791856 kubelet[3123]: E0120 01:54:34.788605 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:34.980587 containerd[1643]: time="2026-01-20T01:54:34.979868554Z" level=error msg="Failed to destroy network for sandbox \"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:35.037349 systemd[1]: run-netns-cni\x2df08573d4\x2d582f\x2d030c\x2dfac5\x2d1f87b4af3d91.mount: Deactivated successfully.
Jan 20 01:54:35.049429 containerd[1643]: time="2026-01-20T01:54:35.049217015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:35.059455 kubelet[3123]: E0120 01:54:35.053305 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:35.059455 kubelet[3123]: E0120 01:54:35.053403 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:35.059455 kubelet[3123]: E0120 01:54:35.053440 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:54:35.065335 kubelet[3123]: E0120 01:54:35.055377 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"486a8308894ae5b34a6681db6fcd9a952af131a373118497d3590ad69fc82ff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261"
Jan 20 01:54:36.791951 containerd[1643]: time="2026-01-20T01:54:36.785823671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:37.814964 containerd[1643]: time="2026-01-20T01:54:37.814391185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:37.998764 containerd[1643]: time="2026-01-20T01:54:37.997861752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}"
Jan 20 01:54:38.052321 kubelet[3123]: E0120 01:54:38.030503 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:54:38.080832 containerd[1643]: time="2026-01-20T01:54:38.078798447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:54:38.334906 containerd[1643]: time="2026-01-20T01:54:38.331876743Z" level=error msg="Failed to destroy network for sandbox \"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:54:38.355339 systemd[1]: run-netns-cni\x2d2c21b079\x2d3fd6\x2d81c1\x2d2e86\x2d016da33eec59.mount: Deactivated successfully.
Jan 20 01:54:38.419188 containerd[1643]: time="2026-01-20T01:54:38.419119680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:38.443767 kubelet[3123]: E0120 01:54:38.441866 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:38.443767 kubelet[3123]: E0120 01:54:38.441974 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:38.443767 kubelet[3123]: E0120 01:54:38.442015 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:38.444031 kubelet[3123]: E0120 01:54:38.442086 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e93a923bdcce9388e9a21a21ab7e8475c9e450d2858e60cfbcb0f600e3c9c87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:54:38.790094 containerd[1643]: time="2026-01-20T01:54:38.782370180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:39.440777 containerd[1643]: time="2026-01-20T01:54:39.435504851Z" level=error msg="Failed to destroy network for sandbox \"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.471044 containerd[1643]: time="2026-01-20T01:54:39.450063370Z" level=error msg="Failed to 
destroy network for sandbox \"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.460286 systemd[1]: run-netns-cni\x2d4f6ca5f9\x2d6534\x2d8740\x2d1135\x2d5a1acae80979.mount: Deactivated successfully. Jan 20 01:54:39.493002 containerd[1643]: time="2026-01-20T01:54:39.490540824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.532574 kubelet[3123]: E0120 01:54:39.500828 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.532574 kubelet[3123]: E0120 01:54:39.500965 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:39.532574 kubelet[3123]: E0120 01:54:39.501005 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:39.512875 systemd[1]: run-netns-cni\x2d252a4dbe\x2d5ec3\x2dd392\x2d8787\x2de411bedc5d20.mount: Deactivated successfully. 
Jan 20 01:54:39.547300 kubelet[3123]: E0120 01:54:39.501649 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe2cb51bb867f7003e0f51b8b4e72e60d5ec594288a736f660a4935a4ef765a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:54:39.582411 containerd[1643]: time="2026-01-20T01:54:39.577428598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.582661 kubelet[3123]: E0120 01:54:39.577950 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:39.582661 kubelet[3123]: E0120 01:54:39.578036 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:39.582661 kubelet[3123]: E0120 01:54:39.578068 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:39.582937 kubelet[3123]: E0120 01:54:39.578137 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3611a1ad1ee26a8388d9bfdd29e0186b4ad23d4f5f4ad6f5865159dd4e5ec249\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:54:40.041934 containerd[1643]: time="2026-01-20T01:54:40.033816184Z" level=error msg="Failed to destroy network for sandbox \"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.062320 containerd[1643]: time="2026-01-20T01:54:40.057223780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.063103 kubelet[3123]: E0120 01:54:40.063045 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.063136 systemd[1]: run-netns-cni\x2d9abc94d2\x2db34c\x2deeb1\x2d8933\x2debc19b4baf2d.mount: Deactivated successfully. Jan 20 01:54:40.064059 kubelet[3123]: E0120 01:54:40.063485 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:40.064059 kubelet[3123]: E0120 01:54:40.063533 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:40.064431 kubelet[3123]: E0120 01:54:40.064390 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f49122467a29e053b99b3e5f24ec8774e4ed0a2304a1aaf27bede4538cf7a09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:54:40.109526 
containerd[1643]: time="2026-01-20T01:54:40.090812249Z" level=error msg="Failed to destroy network for sandbox \"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.144858 systemd[1]: run-netns-cni\x2dba403828\x2dd15f\x2da796\x2d098d\x2d756df51f6909.mount: Deactivated successfully. Jan 20 01:54:40.226358 containerd[1643]: time="2026-01-20T01:54:40.224633621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.226598 kubelet[3123]: E0120 01:54:40.225380 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:40.226598 kubelet[3123]: E0120 01:54:40.225475 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:40.226598 kubelet[3123]: E0120 01:54:40.225512 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:40.226847 kubelet[3123]: E0120 01:54:40.225582 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7e9470106552bf7fe6d9bbc83db104ef41d1df7ab9aa5277da9eff29136e951\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:54:40.779771 kubelet[3123]: E0120 01:54:40.779588 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
20 01:54:40.826473 containerd[1643]: time="2026-01-20T01:54:40.823835031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:54:40.835383 kubelet[3123]: E0120 01:54:40.835339 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:41.835402 containerd[1643]: time="2026-01-20T01:54:41.822018468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:42.855591 containerd[1643]: time="2026-01-20T01:54:42.855474260Z" level=error msg="Failed to destroy network for sandbox \"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:42.942595 containerd[1643]: time="2026-01-20T01:54:42.895066466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:42.944809 kubelet[3123]: E0120 01:54:42.896351 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:42.944809 kubelet[3123]: E0120 01:54:42.934477 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:42.944809 kubelet[3123]: E0120 01:54:42.934539 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:42.975562 kubelet[3123]: E0120 01:54:42.934626 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"89a6b28993d1308230dfed88a5380be6f58f5cc7369073fc472955c7b55c2207\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:54:42.997645 systemd[1]: run-netns-cni\x2d84c96cf9\x2df200\x2dbde9\x2dedc0\x2d0bc35d43ced9.mount: Deactivated successfully. Jan 20 01:54:43.719257 containerd[1643]: time="2026-01-20T01:54:43.717892994Z" level=error msg="Failed to destroy network for sandbox \"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:43.755833 systemd[1]: run-netns-cni\x2dc11c4e9f\x2db59b\x2d216a\x2dda24\x2d19d4dde6ad29.mount: Deactivated successfully. Jan 20 01:54:43.801832 kubelet[3123]: E0120 01:54:43.797312 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:43.803117 containerd[1643]: time="2026-01-20T01:54:43.802622189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:43.824065 kubelet[3123]: E0120 01:54:43.821088 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:43.824065 kubelet[3123]: E0120 01:54:43.821235 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:43.824065 kubelet[3123]: E0120 01:54:43.821281 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:43.824378 kubelet[3123]: E0120 01:54:43.821369 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf49ffeb0dcd4c2c1c9791d4c13cbb732cfa4d032dc6bd25e682ba41435b8e3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:54:50.025465 kubelet[3123]: E0120 01:54:50.015079 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:50.116497 containerd[1643]: time="2026-01-20T01:54:50.042518743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:54:50.271457 kubelet[3123]: E0120 01:54:50.262463 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:51.547816 containerd[1643]: time="2026-01-20T01:54:51.544145173Z" level=error msg="Failed to destroy network for sandbox \"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:51.559494 systemd[1]: run-netns-cni\x2dc221902c\x2dcaaf\x2d408d\x2d995f\x2d54a3c7629646.mount: Deactivated successfully. 
Jan 20 01:54:51.887038 containerd[1643]: time="2026-01-20T01:54:51.886949936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:51.903006 kubelet[3123]: E0120 01:54:51.900152 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:51.903006 kubelet[3123]: E0120 01:54:51.900248 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:54:51.903006 kubelet[3123]: E0120 01:54:51.900282 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:54:51.904398 kubelet[3123]: E0120 01:54:51.900351 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"325de750929b36147b044a474014f0a7cf857b021325028ecc41989331eac49a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:54:51.912270 containerd[1643]: time="2026-01-20T01:54:51.909059413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:54:51.912270 containerd[1643]: time="2026-01-20T01:54:51.909482036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:51.959025 containerd[1643]: time="2026-01-20T01:54:51.958964251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:52.844895 
containerd[1643]: time="2026-01-20T01:54:52.836086866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:53.790662 kubelet[3123]: E0120 01:54:53.776190 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:53.927127 containerd[1643]: time="2026-01-20T01:54:53.804264334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:54:53.927127 containerd[1643]: time="2026-01-20T01:54:53.903324826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:54:54.862108 containerd[1643]: time="2026-01-20T01:54:54.862036412Z" level=error msg="Failed to destroy network for sandbox \"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:54.888138 systemd[1]: run-netns-cni\x2d0a167efc\x2d3892\x2d0f87\x2d9c35\x2d949285329360.mount: Deactivated successfully. Jan 20 01:54:54.953220 containerd[1643]: time="2026-01-20T01:54:54.953081686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:54.963299 kubelet[3123]: E0120 01:54:54.963126 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:54.972084 kubelet[3123]: E0120 01:54:54.970024 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:54.975996 kubelet[3123]: E0120 01:54:54.974311 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:54:54.991785 kubelet[3123]: 
E0120 01:54:54.977030 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b604514534072a7e9410fad1f4ddeb36b54ef158567645fc87c60404d7840303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:54:55.414818 containerd[1643]: time="2026-01-20T01:54:55.404603103Z" level=error msg="Failed to destroy network for sandbox \"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.465669 systemd[1]: run-netns-cni\x2d3046712c\x2dab61\x2d83da\x2daa11\x2d70fdb1a99694.mount: Deactivated successfully. Jan 20 01:54:55.563174 containerd[1643]: time="2026-01-20T01:54:55.562247081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.563996 kubelet[3123]: E0120 01:54:55.563948 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.567601 kubelet[3123]: E0120 01:54:55.564541 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:55.567961 kubelet[3123]: E0120 01:54:55.567920 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:54:55.568194 kubelet[3123]: E0120 01:54:55.568146 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37eaeafb79ea0d133ffde9df4674e899233fbb686fe95e801c52e3b84eabe664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:54:55.586146 containerd[1643]: time="2026-01-20T01:54:55.586069906Z" level=error msg="Failed to destroy network for sandbox \"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.643621 systemd[1]: run-netns-cni\x2d77c0235b\x2de683\x2d2d55\x2def6f\x2d4b3c55bfa224.mount: Deactivated successfully. Jan 20 01:54:55.749518 containerd[1643]: time="2026-01-20T01:54:55.653915161Z" level=error msg="Failed to destroy network for sandbox \"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.718576 systemd[1]: run-netns-cni\x2d035dbf81\x2d5ab8\x2d3b23\x2d1759\x2d2974ef20e740.mount: Deactivated successfully. Jan 20 01:54:55.813786 containerd[1643]: time="2026-01-20T01:54:55.797778754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.879408 kubelet[3123]: E0120 01:54:55.879280 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.992820 kubelet[3123]: E0120 01:54:55.879522 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:55.992820 kubelet[3123]: E0120 01:54:55.879566 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:54:55.992820 kubelet[3123]: E0120 01:54:55.879641 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c31657837eb85fe706808eda3f1cf189e5f50b1acff3a179c2f70b1c3d69737\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:54:55.993636 containerd[1643]: time="2026-01-20T01:54:55.959631845Z" level=error msg="Failed to destroy network for sandbox \"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:55.993636 containerd[1643]: time="2026-01-20T01:54:55.962622876Z" level=info msg="container event discarded" container=a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522 type=CONTAINER_CREATED_EVENT Jan 20 01:54:55.993636 containerd[1643]: time="2026-01-20T01:54:55.962947571Z" level=info msg="container event discarded" container=a77cec3873f66e75b23bf3fc1708f1e973d303346844feae7806517992673522 type=CONTAINER_STARTED_EVENT Jan 20 01:54:56.042896 containerd[1643]: time="2026-01-20T01:54:56.035124241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.047912 systemd[1]: run-netns-cni\x2de254dc1e\x2da9cc\x2d20d5\x2d6344\x2d133aa23abaee.mount: Deactivated successfully. 
Jan 20 01:54:56.118865 kubelet[3123]: E0120 01:54:56.045254 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.118865 kubelet[3123]: E0120 01:54:56.114576 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:56.118865 kubelet[3123]: E0120 01:54:56.114616 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:54:56.119122 kubelet[3123]: E0120 01:54:56.114792 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"500603f033649dc8b97ead8989b19b6cc78b79725326ccec7f793b62766f2574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:54:56.259210 containerd[1643]: time="2026-01-20T01:54:56.254096551Z" level=error msg="Failed to destroy network for sandbox \"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.297846 systemd[1]: run-netns-cni\x2d5f879365\x2d82a7\x2dbbc6\x2dfd1c\x2d6f1849d4daf2.mount: Deactivated successfully. 
Jan 20 01:54:56.299617 containerd[1643]: time="2026-01-20T01:54:56.299553029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.362794 kubelet[3123]: E0120 01:54:56.362217 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.362794 kubelet[3123]: E0120 01:54:56.362380 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:56.362794 kubelet[3123]: E0120 01:54:56.362413 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:54:56.363079 kubelet[3123]: E0120 01:54:56.362478 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a485d8cf3b1c7a178c48dc4d5d2561233d10ec493b75d95dd75946765a727f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:54:56.481164 containerd[1643]: time="2026-01-20T01:54:56.480997560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.482341 kubelet[3123]: E0120 01:54:56.481661 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:56.482341 kubelet[3123]: E0120 01:54:56.481889 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:56.482341 kubelet[3123]: E0120 01:54:56.481919 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:54:56.482546 kubelet[3123]: E0120 01:54:56.481982 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7584748df08308e75fc6db4450654edd57c8bdf9628dc6a7fbfa1e22934fde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:54:56.811563 containerd[1643]: time="2026-01-20T01:54:56.810811331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:54:56.844753 containerd[1643]: time="2026-01-20T01:54:56.844210884Z" level=info msg="container event discarded" container=e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7 type=CONTAINER_CREATED_EVENT Jan 20 01:54:58.450281 containerd[1643]: time="2026-01-20T01:54:58.438568769Z" level=error msg="Failed to destroy network for sandbox \"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:58.492142 systemd[1]: run-netns-cni\x2d357477b1\x2d82f7\x2d0e34\x2d695d\x2d9f9b68f243f5.mount: Deactivated successfully. 
Jan 20 01:54:58.564522 containerd[1643]: time="2026-01-20T01:54:58.548447986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:58.564955 kubelet[3123]: E0120 01:54:58.549830 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:54:58.564955 kubelet[3123]: E0120 01:54:58.550039 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:58.564955 kubelet[3123]: E0120 01:54:58.550073 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:54:58.566315 kubelet[3123]: E0120 01:54:58.550148 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"442cdfff03ca5df83d6beebd0864f20c80ae29816bcbc184de09c6a659a712ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:55:00.398896 containerd[1643]: time="2026-01-20T01:55:00.398269751Z" level=info msg="container event discarded" container=e33f8bc750e3a4f212a4568507107730692da338ec1fe7ce64e084d5645c77c7 type=CONTAINER_STARTED_EVENT Jan 20 01:55:05.912476 kubelet[3123]: E0120 01:55:05.912290 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:05.980153 containerd[1643]: time="2026-01-20T01:55:05.926241700Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:55:06.919125 containerd[1643]: time="2026-01-20T01:55:06.914515636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:55:07.945029 containerd[1643]: time="2026-01-20T01:55:07.940408447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:55:07.965088 containerd[1643]: time="2026-01-20T01:55:07.961416504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:08.852541 containerd[1643]: time="2026-01-20T01:55:08.852457902Z" level=info msg="container event discarded" container=eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa type=CONTAINER_CREATED_EVENT Jan 20 01:55:08.917086 containerd[1643]: time="2026-01-20T01:55:08.908311565Z" level=info msg="container event discarded" container=eb8e35fa7303fe39aded908b411e4f668a84c2383ceaac749c50b482a095aeaa type=CONTAINER_STARTED_EVENT Jan 20 01:55:08.917086 containerd[1643]: time="2026-01-20T01:55:08.878118998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:08.937130 containerd[1643]: time="2026-01-20T01:55:08.878243324Z" level=error msg="Failed to destroy network for sandbox \"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:08.982246 containerd[1643]: time="2026-01-20T01:55:08.982188930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:08.983073 systemd[1]: run-netns-cni\x2dcb484a4d\x2df36c\x2d9558\x2d219e\x2deaa90c627c86.mount: Deactivated successfully. 
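The interleaved kubelet dns.go:153 warnings are a separate, benign symptom: the host resolv.conf carries more nameservers than can usefully be propagated to pods (the classic glibc resolver honors at most three), so the kubelet truncates the list and logs the line it actually applied, here 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation, assuming the three-entry cap; the helper name is mine, not kubelet's.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers is the classic resolver cap (glibc reads at most three
// nameserver lines); past this point the kubelet warns and truncates.
const maxNameservers = 3

// applyNameserverLimit returns the nameservers that survive truncation and
// whether any were dropped, mirroring the warning seen in the log.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = true
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(conf)
	if dropped {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}
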
Jan 20 01:55:09.225255 containerd[1643]: time="2026-01-20T01:55:09.225023965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:09.228879 kubelet[3123]: E0120 01:55:09.227971 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:09.228879 kubelet[3123]: E0120 01:55:09.228079 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:55:09.228879 kubelet[3123]: E0120 01:55:09.228114 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:55:09.264913 kubelet[3123]: E0120 01:55:09.228180 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bcd697ff0ca19130e2fe41d1afe01f3c8f1d102633682bb34bad7d8ae0f5c39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:55:09.911909 containerd[1643]: time="2026-01-20T01:55:09.911838684Z" level=error msg="Failed to destroy network for sandbox \"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:09.947088 kubelet[3123]: E0120 01:55:09.945640 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:09.955123 systemd[1]: run-netns-cni\x2d25a4a04f\x2d4dca\x2d3dba\x2de56b\x2d82049a586a98.mount: Deactivated 
successfully. Jan 20 01:55:09.994616 containerd[1643]: time="2026-01-20T01:55:09.994561109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:55:10.203880 containerd[1643]: time="2026-01-20T01:55:10.190353648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:10.236101 kubelet[3123]: E0120 01:55:10.231480 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:10.238020 kubelet[3123]: E0120 01:55:10.237584 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:55:10.246125 kubelet[3123]: E0120 01:55:10.238533 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:55:10.264532 kubelet[3123]: E0120 01:55:10.255661 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4639620221900baaffcd917c8205b156a55beb7e8a610d754838028c7a5ba1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:55:10.581089 containerd[1643]: time="2026-01-20T01:55:10.581018789Z" level=error msg="Failed to destroy network for sandbox \"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:10.661033 
systemd[1]: run-netns-cni\x2d4ff0ef1b\x2dda0e\x2d1781\x2d7949\x2d1059689626ba.mount: Deactivated successfully. Jan 20 01:55:10.753970 containerd[1643]: time="2026-01-20T01:55:10.753896674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:10.764942 kubelet[3123]: E0120 01:55:10.764872 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:10.765790 kubelet[3123]: E0120 01:55:10.765219 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:55:10.765790 kubelet[3123]: E0120 01:55:10.765269 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:55:10.765790 kubelet[3123]: E0120 01:55:10.765350 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"711a7fc2f96428a70ddb79e91b8dc85c47d5114dcc98c00e81bc077d1125cdbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:55:11.534853 containerd[1643]: time="2026-01-20T01:55:11.519533882Z" level=error msg="Failed to destroy network for sandbox \"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:11.873199 containerd[1643]: time="2026-01-20T01:55:11.872602244Z" level=error msg="Failed to destroy network for sandbox \"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:11.881292 containerd[1643]: time="2026-01-20T01:55:11.877988715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:11.956199 systemd[1]: run-netns-cni\x2d28df479a\x2dadda\x2d51fc\x2dabac\x2dc33fbe265ae6.mount: Deactivated successfully. Jan 20 01:55:12.039987 systemd[1]: run-netns-cni\x2daf64d800\x2dfcbc\x2dd690\x2d8517\x2df1f5d4380d20.mount: Deactivated successfully. Jan 20 01:55:12.140244 containerd[1643]: time="2026-01-20T01:55:12.138391076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.156237 kubelet[3123]: E0120 01:55:12.152020 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.156237 kubelet[3123]: E0120 01:55:12.152111 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:55:12.156237 kubelet[3123]: E0120 01:55:12.152145 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:55:12.157168 kubelet[3123]: E0120 01:55:12.152221 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3660b91a21f39600655957c0973f4d0d621b2c1c27636b005149981fa1e58487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" 
podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:55:12.198233 containerd[1643]: time="2026-01-20T01:55:12.188354101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.312086 kubelet[3123]: E0120 01:55:12.305947 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.312086 kubelet[3123]: E0120 01:55:12.306078 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:55:12.312086 kubelet[3123]: E0120 01:55:12.306112 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:55:12.312362 kubelet[3123]: E0120 01:55:12.306179 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7df2b885e13a65a8237ca6277ff6443877546346602d7b6df90f481561115bc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:55:12.392788 containerd[1643]: time="2026-01-20T01:55:12.391467128Z" level=error msg="Failed to destroy network for sandbox \"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.507077 systemd[1]: run-netns-cni\x2ddf99b7ba\x2d10b3\x2dec6e\x2d6679\x2d47cddaa548a8.mount: Deactivated successfully. 
Jan 20 01:55:12.644570 containerd[1643]: time="2026-01-20T01:55:12.636050056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.645398 kubelet[3123]: E0120 01:55:12.641021 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.645398 kubelet[3123]: E0120 01:55:12.641112 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:55:12.645398 kubelet[3123]: E0120 01:55:12.641144 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:55:12.645535 kubelet[3123]: E0120 01:55:12.641230 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e139b16db8197ccff7bf4fb1349d7c3d9efffb00b75f2c22ac5c2c975f5e0f34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:55:12.938081 containerd[1643]: time="2026-01-20T01:55:12.937000210Z" level=error msg="Failed to destroy network for sandbox \"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.954526 systemd[1]: run-netns-cni\x2d2f9e155d\x2dcb47\x2d4f78\x2d0f61\x2d6187894f9902.mount: Deactivated successfully. 
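The run-netns-cni\x2d*.mount units that systemd keeps deactivating are the mount units backing each sandbox's network namespace under /run/netns; systemd derives unit names from paths by turning '/' into '-' and escaping a literal '-' as \x2d. A small sketch of the reverse mapping, assuming only that documented escaping (systemd-escape --unescape performs the canonical conversion).

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath reverses systemd's path escaping for mount units: "-" separates
// path components, and escaped bytes appear as \xHH (a literal '-' is \x2d).
// e.g. "run-netns-cni\x2dabc.mount" -> "/run/netns/cni-abc".
func unitToPath(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for _, comp := range strings.Split(name, "-") {
		b.WriteByte('/')
		for i := 0; i < len(comp); {
			if comp[i] == '\\' && i+3 < len(comp) && comp[i+1] == 'x' {
				v, err := strconv.ParseUint(comp[i+2:i+4], 16, 8)
				if err != nil {
					return "", err
				}
				b.WriteByte(byte(v))
				i += 4
				continue
			}
			b.WriteByte(comp[i])
			i++
		}
	}
	return b.String(), nil
}

func main() {
	p, err := unitToPath(`run-netns-cni\x2d357477b1\x2d82f7\x2d0e34\x2d695d\x2d9f9b68f243f5.mount`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /run/netns/cni-357477b1-82f7-0e34-695d-9f9b68f243f5
}
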
Jan 20 01:55:12.967421 containerd[1643]: time="2026-01-20T01:55:12.967347701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.976544 kubelet[3123]: E0120 01:55:12.976398 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:12.976544 kubelet[3123]: E0120 01:55:12.976502 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:55:12.988912 kubelet[3123]: E0120 01:55:12.976537 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:55:12.988912 kubelet[3123]: E0120 01:55:12.976823 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ab73c92de22de3cb437a60ceab2db893ef700397f27c8f36f93daa6fda1b07d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:55:13.449876 containerd[1643]: time="2026-01-20T01:55:13.445347713Z" level=error msg="Failed to destroy network for sandbox \"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:13.457511 systemd[1]: run-netns-cni\x2df6bf9884\x2d83e7\x2d8303\x2d2d8d\x2d426991816392.mount: Deactivated successfully. 
Jan 20 01:55:13.509042 containerd[1643]: time="2026-01-20T01:55:13.508440944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:13.534376 kubelet[3123]: E0120 01:55:13.534308 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:13.547026 kubelet[3123]: E0120 01:55:13.538857 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:55:13.547026 kubelet[3123]: E0120 01:55:13.538921 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:55:13.552535 kubelet[3123]: E0120 01:55:13.552430 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be063780805994b2987b22808c48ddbeff02b883071fa36078341f260afe8943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:55:19.942621 kubelet[3123]: E0120 01:55:19.942138 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:19.954144 containerd[1643]: time="2026-01-20T01:55:19.947588893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:55:20.935138 containerd[1643]: time="2026-01-20T01:55:20.920082516Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:23.706129 containerd[1643]: time="2026-01-20T01:55:23.706062314Z" level=error msg="Failed to destroy network for sandbox \"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:23.751216 systemd[1]: run-netns-cni\x2da2214fe6\x2d6ece\x2d0cb0\x2dabe7\x2df95336b773af.mount: Deactivated successfully. Jan 20 01:55:23.765259 containerd[1643]: time="2026-01-20T01:55:23.762001545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:23.765655 kubelet[3123]: E0120 01:55:23.765466 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:23.777877 kubelet[3123]: E0120 01:55:23.765649 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:55:23.777877 kubelet[3123]: E0120 01:55:23.765816 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:55:23.777877 kubelet[3123]: E0120 01:55:23.766045 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d11e9640e59eb6b9aa8b02190430a6a30e4d78859136253d029f1e344b1b8d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:55:24.491803 containerd[1643]: time="2026-01-20T01:55:24.465987471Z" 
level=error msg="Failed to destroy network for sandbox \"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:24.556803 systemd[1]: run-netns-cni\x2dfcb17443\x2d1bf6\x2d660c\x2d76b2\x2d32e7e3b983ae.mount: Deactivated successfully. Jan 20 01:55:24.596417 containerd[1643]: time="2026-01-20T01:55:24.584924074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:24.596848 kubelet[3123]: E0120 01:55:24.585872 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:24.596848 kubelet[3123]: E0120 01:55:24.586058 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:55:24.596848 kubelet[3123]: E0120 01:55:24.592140 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:55:24.597086 kubelet[3123]: E0120 01:55:24.592224 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2931cb21714ff570da81b46bcef8d4ff4d4cba74f9a1d9d55d780aa1c3cccf5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:55:24.789138 kubelet[3123]: E0120 01:55:24.788999 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:24.797387 containerd[1643]: 
time="2026-01-20T01:55:24.792583429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:55:24.801858 containerd[1643]: time="2026-01-20T01:55:24.800080070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:55:24.822649 containerd[1643]: time="2026-01-20T01:55:24.805095962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:24.822649 containerd[1643]: time="2026-01-20T01:55:24.805376586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:55:26.622645 containerd[1643]: time="2026-01-20T01:55:26.609563669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:27.052057 containerd[1643]: time="2026-01-20T01:55:27.043600889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:29.196093 containerd[1643]: time="2026-01-20T01:55:29.157195322Z" level=info msg="container event discarded" container=0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146 type=CONTAINER_CREATED_EVENT Jan 20 01:55:30.779903 containerd[1643]: time="2026-01-20T01:55:30.779810522Z" level=info msg="container event discarded" container=0f3d0cdf67286344235ecb391494b9eee9bbacbe2aff2443da583a6b1066b146 type=CONTAINER_STARTED_EVENT Jan 20 01:55:30.845458 containerd[1643]: time="2026-01-20T01:55:30.835439210Z" level=error msg="Failed to destroy network for sandbox \"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.103472 systemd[1]: run-netns-cni\x2d8e44356a\x2daf88\x2d9730\x2d0abf\x2d169954a27697.mount: Deactivated successfully. Jan 20 01:55:31.429644 containerd[1643]: time="2026-01-20T01:55:31.331651979Z" level=error msg="Failed to destroy network for sandbox \"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.432017 systemd[1]: run-netns-cni\x2d0f1a5374\x2d08f9\x2dc57e\x2d4fc4\x2d883003d39c23.mount: Deactivated successfully. Jan 20 01:55:31.528469 systemd[1]: run-netns-cni\x2d007b66de\x2d9ca8\x2dfdb7\x2d8e93\x2d059477e8d78a.mount: Deactivated successfully. 
Jan 20 01:55:31.621876 kubelet[3123]: E0120 01:55:31.606039 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.621876 kubelet[3123]: E0120 01:55:31.606126 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:55:31.621876 kubelet[3123]: E0120 01:55:31.606154 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:55:31.622985 containerd[1643]: time="2026-01-20T01:55:31.438080959Z" level=error msg="Failed to destroy network for sandbox \"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.622985 containerd[1643]: time="2026-01-20T01:55:31.480161584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.625406 kubelet[3123]: E0120 01:55:31.606361 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fc85188ffe320ac79792c96f6d17e52ae66ee9c958d3de6db6850f0c8e616a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:55:31.735578 containerd[1643]: time="2026-01-20T01:55:31.735035608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.761606 kubelet[3123]: E0120 01:55:31.743941 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.761606 kubelet[3123]: E0120 01:55:31.744101 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:55:31.761606 kubelet[3123]: E0120 01:55:31.744141 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:55:31.762037 kubelet[3123]: E0120 01:55:31.758873 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ecea8a047ab9b87197cf6720421a06169cb8408a91f01d7052a074c611bcaf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:55:31.918829 containerd[1643]: time="2026-01-20T01:55:31.918533638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:31.930451 kubelet[3123]: E0120 01:55:31.925814 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 20 01:55:31.930451 kubelet[3123]: E0120 01:55:31.925953 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:55:31.930451 kubelet[3123]: E0120 01:55:31.925985 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:55:32.036977 kubelet[3123]: E0120 01:55:32.034029 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"918e0e7f25d39e975a58eb213d6fb3a029852d771406a8f0209c81787b3738e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:55:33.755904 containerd[1643]: time="2026-01-20T01:55:33.745322535Z" level=error msg="Failed to destroy network for sandbox \"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:33.773062 systemd[1]: run-netns-cni\x2d3381c0a1\x2d04af\x2d6da7\x2db4b8\x2dc5eab61393ea.mount: Deactivated successfully. 
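The cadence matters more than any single failure: each pod's RunPodSandbox attempt ends in pod_workers.go "Error syncing pod, skipping", and the same pod is retried with a widening gap (coredns-674b8bbfcf-ft2sl at 01:55:05, again at 01:55:19, again at 01:55:24). That shape is per-pod exponential backoff on sandbox-creation errors; a toy sketch of the pattern follows, with the base delay and cap chosen for illustration rather than taken from kubelet's configuration.

package main

import (
	"fmt"
	"time"
)

// retryDelays illustrates capped exponential backoff, the shape of the retry
// intervals visible in the log; the 10s base and 5m cap are illustrative.
func retryDelays(attempts int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	var out []time.Duration
	d := base
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	for i, d := range retryDelays(6) {
		fmt.Printf("attempt %d: wait %v before retrying RunPodSandbox\n", i+1, d)
	}
}
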
Jan 20 01:55:33.905861 containerd[1643]: time="2026-01-20T01:55:33.890397264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:33.945061 kubelet[3123]: E0120 01:55:33.934260 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:33.945061 kubelet[3123]: E0120 01:55:33.934422 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:55:33.945061 kubelet[3123]: E0120 01:55:33.934460 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:55:33.955609 kubelet[3123]: E0120 01:55:33.934526 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31b909650d203383221bdd8ee9dfd565f356de6e54b433b0c270d42ac185d16e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:55:34.180330 containerd[1643]: time="2026-01-20T01:55:34.174609421Z" level=error msg="Failed to destroy network for sandbox \"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:34.247316 systemd[1]: run-netns-cni\x2da0efdf64\x2dd57f\x2de7a0\x2d8357\x2d8cd4714feebf.mount: Deactivated successfully. 
Jan 20 01:55:34.485601 containerd[1643]: time="2026-01-20T01:55:34.481305051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:34.513319 kubelet[3123]: E0120 01:55:34.506822 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:34.534391 kubelet[3123]: E0120 01:55:34.521669 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:55:34.534391 kubelet[3123]: E0120 01:55:34.530670 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:55:34.538349 kubelet[3123]: E0120 01:55:34.535822 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60005c4506d9310da51e828c8d501c5dca5acf3ee7f47a20316b0784660d8618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:55:34.704546 containerd[1643]: time="2026-01-20T01:55:34.701056037Z" level=error msg="Failed to destroy network for sandbox \"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:55:34.748500 systemd[1]: run-netns-cni\x2dd120d671\x2d4923\x2d6c7d\x2de243\x2d268a43fe1c36.mount: Deactivated successfully. 
Jan 20 01:55:34.904001 containerd[1643]: time="2026-01-20T01:55:34.886207780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:34.923936 kubelet[3123]: E0120 01:55:34.895018 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:34.923936 kubelet[3123]: E0120 01:55:34.895089 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:55:34.923936 kubelet[3123]: E0120 01:55:34.895119 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:55:34.951340 kubelet[3123]: E0120 01:55:34.895247 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9775b534ef8510171e4c5476efd354226fd6683441cbb840cdc9f3d8d277beca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:55:35.829479 kubelet[3123]: E0120 01:55:35.822370 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:35.914211 containerd[1643]: time="2026-01-20T01:55:35.889844024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}"
Jan 20 01:55:36.848496 containerd[1643]: time="2026-01-20T01:55:36.848039015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}"
Jan 20 01:55:38.799119 kubelet[3123]: E0120 01:55:38.799026 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:39.116953 containerd[1643]: time="2026-01-20T01:55:39.104904288Z" level=error msg="Failed to destroy network for sandbox \"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:39.164244 systemd[1]: run-netns-cni\x2d3b71a560\x2dd29a\x2d965a\x2df905\x2dfa2336f202b3.mount: Deactivated successfully.
Jan 20 01:55:39.344432 containerd[1643]: time="2026-01-20T01:55:39.340433483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:39.353908 kubelet[3123]: E0120 01:55:39.345060 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:39.354601 kubelet[3123]: E0120 01:55:39.354551 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:55:39.354923 kubelet[3123]: E0120 01:55:39.354888 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:55:39.420301 kubelet[3123]: E0120 01:55:39.400979 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7026297428802c7fd879a65ea0dae13864ff2c1ae3ed555802b7705f6e3d088\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 01:55:39.921825 containerd[1643]: time="2026-01-20T01:55:39.893542186Z" level=error msg="Failed to destroy network for sandbox \"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:39.977846 systemd[1]: run-netns-cni\x2d70e1578d\x2d983a\x2d50b2\x2d7dd2\x2d3c2e1a706974.mount: Deactivated successfully.
Jan 20 01:55:40.100432 containerd[1643]: time="2026-01-20T01:55:40.087317468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:40.145536 kubelet[3123]: E0120 01:55:40.138992 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:40.145536 kubelet[3123]: E0120 01:55:40.139220 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:55:40.145536 kubelet[3123]: E0120 01:55:40.139262 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:55:40.165475 kubelet[3123]: E0120 01:55:40.139334 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7123650014162a10ce07bb5394bbb4849969d89fec4842e5a9a24cc1380d7820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261"
Jan 20 01:55:43.794287 containerd[1643]: time="2026-01-20T01:55:43.787358038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}"
Jan 20 01:55:43.804501 containerd[1643]: time="2026-01-20T01:55:43.803087071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:55:45.954017 kubelet[3123]: E0120 01:55:45.953786 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:46.186994 containerd[1643]: time="2026-01-20T01:55:46.003300983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}"
Jan 20 01:55:46.186994 containerd[1643]: time="2026-01-20T01:55:46.186371018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}"
Jan 20 01:55:46.235912 containerd[1643]: time="2026-01-20T01:55:46.192512550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:55:46.832811 containerd[1643]: time="2026-01-20T01:55:46.813923622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}"
Jan 20 01:55:49.718577 containerd[1643]: time="2026-01-20T01:55:49.467522427Z" level=error msg="Failed to destroy network for sandbox \"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:50.105635 systemd[1]: run-netns-cni\x2d2f9f5a48\x2d2bb8\x2d2545\x2de0f4\x2de121eccb0cbf.mount: Deactivated successfully.
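The add and delete failures above all report the same root cause: /var/lib/calico/nodename is missing, meaning calico/node has not yet written its node identity where the CNI plugin expects to read it. The plugin's own message names both the check and the remedy. The following stand-alone Python sketch (a hypothetical triage helper, not part of Calico or kubelet; only the path and wording come from the log) performs the same stat on the affected node:

#!/usr/bin/env python3
# Hypothetical triage helper: repeat the check the "calico" CNI plugin
# reports in the errors above. Run on the affected node.
import os
import sys

NODENAME = "/var/lib/calico/nodename"  # path quoted in the log messages

try:
    os.stat(NODENAME)
except FileNotFoundError:
    # Same guidance the plugin prints: calico/node creates this file once it
    # is running with the host's /var/lib/calico/ mounted into the container.
    sys.exit(f"stat {NODENAME}: no such file or directory: "
             "check that the calico/node container is running "
             "and has mounted /var/lib/calico/")
with open(NODENAME) as f:
    print(f"node identity present: {f.read().strip()}")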
Jan 20 01:55:50.897201 kubelet[3123]: E0120 01:55:50.896938 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.09s"
Jan 20 01:55:51.011946 containerd[1643]: time="2026-01-20T01:55:50.991787688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:51.081934 kubelet[3123]: E0120 01:55:51.081841 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:51.082416 kubelet[3123]: E0120 01:55:51.082374 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:55:51.082882 kubelet[3123]: E0120 01:55:51.082847 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:55:51.083547 kubelet[3123]: E0120 01:55:51.083503 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ec29060466fe5fe95eea2aee291f80204c7031f5f54fa283c712cba2cbf29bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 01:55:51.949585 containerd[1643]: time="2026-01-20T01:55:51.949499559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}"
Jan 20 01:55:52.504195 containerd[1643]: time="2026-01-20T01:55:52.313508900Z" level=error msg="Failed to destroy network for sandbox \"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:52.614422 systemd[1]: run-netns-cni\x2d279b1791\x2dd7a7\x2d3679\x2d0edd\x2d876e5c178b98.mount: Deactivated successfully.
Jan 20 01:55:52.812513 containerd[1643]: time="2026-01-20T01:55:52.804437305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:52.826297 kubelet[3123]: E0120 01:55:52.824256 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:52.826297 kubelet[3123]: E0120 01:55:52.824344 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:55:52.826297 kubelet[3123]: E0120 01:55:52.824376 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:55:52.827282 kubelet[3123]: E0120 01:55:52.824589 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faf06dfbb7fff462fb1b72171f80be6efc9605c1e669605840ee4a8d6ebf1b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 01:55:54.061418 kubelet[3123]: E0120 01:55:54.031629 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:54.156633 containerd[1643]: time="2026-01-20T01:55:54.156570660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}"
Jan 20 01:55:54.245646 kubelet[3123]: E0120 01:55:54.241273 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:56.006145 containerd[1643]: time="2026-01-20T01:55:56.006005935Z" level=error msg="Failed to destroy network for sandbox \"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.026857 systemd[1]: run-netns-cni\x2dc506559a\x2d8654\x2d1a40\x2d5829\x2d84b01b732db2.mount: Deactivated successfully.
Jan 20 01:55:56.283800 containerd[1643]: time="2026-01-20T01:55:56.280452403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.287231 kubelet[3123]: E0120 01:55:56.283567 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.287231 kubelet[3123]: E0120 01:55:56.283650 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:55:56.291367 kubelet[3123]: E0120 01:55:56.291119 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:55:56.291367 kubelet[3123]: E0120 01:55:56.291256 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b02e6cdf30a0d4b2307e5409654f56c5a616aca98616710f88178af72d02d876\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:55:56.582446 containerd[1643]: time="2026-01-20T01:55:56.543965554Z" level=error msg="Failed to destroy network for sandbox \"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.568012 systemd[1]: run-netns-cni\x2d40a42873\x2dfbed\x2da319\x2d7ab0\x2d928c43a57ad0.mount: Deactivated successfully.
Jan 20 01:55:56.734959 containerd[1643]: time="2026-01-20T01:55:56.734453281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.757857 kubelet[3123]: E0120 01:55:56.752791 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:56.757857 kubelet[3123]: E0120 01:55:56.752890 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:55:56.757857 kubelet[3123]: E0120 01:55:56.752929 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv"
Jan 20 01:55:56.758372 kubelet[3123]: E0120 01:55:56.752998 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7b50cd089989c272cb9a1ff90d4472962eb68e3a86fc1830f5a986dd1e2b8c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e"
Jan 20 01:55:57.093306 containerd[1643]: time="2026-01-20T01:55:57.092422649Z" level=error msg="Failed to destroy network for sandbox \"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:57.128436 systemd[1]: run-netns-cni\x2d67aff9cb\x2dd287\x2d0241\x2dde67\x2d41506c81a28c.mount: Deactivated successfully.
Jan 20 01:55:57.205428 containerd[1643]: time="2026-01-20T01:55:57.197160856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:57.265313 kubelet[3123]: E0120 01:55:57.210312 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:57.265313 kubelet[3123]: E0120 01:55:57.210414 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:55:57.265313 kubelet[3123]: E0120 01:55:57.210457 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:55:57.277326 kubelet[3123]: E0120 01:55:57.210537 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e4c9dcc49d30e6e11bb223d90cdb522849a2cf6eb4beca7db711607689df842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340"
Jan 20 01:55:57.481939 containerd[1643]: time="2026-01-20T01:55:57.480268556Z" level=error msg="Failed to destroy network for sandbox \"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:57.504806 systemd[1]: run-netns-cni\x2d74c56619\x2dd1ed\x2d601d\x2d7e5b\x2d3dc73d268db4.mount: Deactivated successfully.
Jan 20 01:55:57.781023 kubelet[3123]: E0120 01:55:57.780472 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:58.069251 containerd[1643]: time="2026-01-20T01:55:58.064980730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:58.095880 kubelet[3123]: E0120 01:55:58.087015 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:58.116341 kubelet[3123]: E0120 01:55:58.099998 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:55:58.138385 kubelet[3123]: E0120 01:55:58.114016 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:55:58.138385 kubelet[3123]: E0120 01:55:58.128862 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29fce84b79890d37524a25b2928bac4f0ba13696ebf7d6ffa82d2141e986e106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99"
Jan 20 01:55:58.476569 containerd[1643]: time="2026-01-20T01:55:58.454653429Z" level=error msg="Failed to destroy network for sandbox \"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:58.520001 systemd[1]: run-netns-cni\x2d45e5cc60\x2db7af\x2da060\x2dbd79\x2d4918fdd769a6.mount: Deactivated successfully.
Jan 20 01:55:58.651809 containerd[1643]: time="2026-01-20T01:55:58.632450938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:58.655302 kubelet[3123]: E0120 01:55:58.632977 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:58.655302 kubelet[3123]: E0120 01:55:58.640468 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:55:58.655302 kubelet[3123]: E0120 01:55:58.640523 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl"
Jan 20 01:55:58.655524 kubelet[3123]: E0120 01:55:58.640645 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdce4159227a11a0838398234b5050cb18e5d399381178b3702b19135c053713\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261"
Jan 20 01:55:58.831415 kubelet[3123]: E0120 01:55:58.799525 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:55:59.345859 containerd[1643]: time="2026-01-20T01:55:59.299962378Z" level=error msg="Failed to destroy network for sandbox \"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:59.416904 systemd[1]: run-netns-cni\x2d2c63642b\x2d1a70\x2dd0a2\x2d8b70\x2dc0f81fd88c8d.mount: Deactivated successfully.
Jan 20 01:55:59.521975 containerd[1643]: time="2026-01-20T01:55:59.521629444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:59.545425 kubelet[3123]: E0120 01:55:59.545362 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:55:59.546044 kubelet[3123]: E0120 01:55:59.546008 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:55:59.546044 kubelet[3123]: E0120 01:55:59.553410 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj"
Jan 20 01:55:59.546044 kubelet[3123]: E0120 01:55:59.553632 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97784e2e13fe6b4988c44c0eca6b7eef88fc79490d0c0693b7484f3eb603bcef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 01:56:04.937793 containerd[1643]: time="2026-01-20T01:56:04.828028567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:56:05.093192 containerd[1643]: time="2026-01-20T01:56:05.091470393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}"
Jan 20 01:56:09.143080 containerd[1643]: time="2026-01-20T01:56:09.143026230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}"
Jan 20 01:56:09.181497 containerd[1643]: time="2026-01-20T01:56:09.158122777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:56:10.040842 containerd[1643]: time="2026-01-20T01:56:10.040273745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}"
Jan 20 01:56:10.951456 containerd[1643]: time="2026-01-20T01:56:10.947909681Z" level=error msg="Failed to destroy network for sandbox \"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:10.985115 systemd[1]: run-netns-cni\x2db66b5af8\x2d1c91\x2d4f1d\x2d2b4e\x2d7b1cdae5ff58.mount: Deactivated successfully.
Jan 20 01:56:11.302303 containerd[1643]: time="2026-01-20T01:56:11.275476800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:11.304083 kubelet[3123]: E0120 01:56:11.304024 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:11.305024 kubelet[3123]: E0120 01:56:11.304984 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:56:11.316430 kubelet[3123]: E0120 01:56:11.310382 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j"
Jan 20 01:56:11.324971 kubelet[3123]: E0120 01:56:11.324894 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b326c640a221ba7d6629b0fdb717ec3fea859d547867ebf5467af5565322ce71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 01:56:11.645985 containerd[1643]: time="2026-01-20T01:56:11.635334644Z" level=error msg="Failed to destroy network for sandbox \"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:11.675923 systemd[1]: run-netns-cni\x2d94429ad9\x2d5a62\x2d28b8\x2d0383\x2d41b454712e51.mount: Deactivated successfully.
Jan 20 01:56:12.000081 kubelet[3123]: E0120 01:56:11.721057 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:12.000081 kubelet[3123]: E0120 01:56:11.721154 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:56:12.000081 kubelet[3123]: E0120 01:56:11.751801 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx"
Jan 20 01:56:12.000533 containerd[1643]: time="2026-01-20T01:56:11.720119219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:12.000533 containerd[1643]: time="2026-01-20T01:56:11.818331697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}"
Jan 20 01:56:12.000533 containerd[1643]: time="2026-01-20T01:56:11.899655064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}"
Jan 20 01:56:12.001435 kubelet[3123]: E0120 01:56:11.757100 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3df229a6b1c813310eca2e505b56b64d7f693cd4cbb037fccd6c20bffa9f182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 01:56:12.001435 kubelet[3123]: E0120 01:56:11.797066 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:56:12.779413 kubelet[3123]: E0120 01:56:12.778820 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:56:12.806947 containerd[1643]: time="2026-01-20T01:56:12.804922732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}"
Jan 20 01:56:13.044478 containerd[1643]: time="2026-01-20T01:56:13.041070241Z" level=error msg="Failed to destroy network for sandbox \"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.124888 systemd[1]: run-netns-cni\x2dd2888f8a\x2d75c4\x2d1cf0\x2d8b02\x2db1f231204cbc.mount: Deactivated successfully.
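Interleaved with the sandbox errors, kubelet's dns.go:153 warning is a separate, benign issue: the node's resolv.conf lists more nameservers than the three that glibc-based resolvers support, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A sketch of that truncation (an illustration of the three-server limit, not kubelet's actual code; the fourth address is hypothetical, since the log only shows the survivors):

# Illustration of the dns.go:153 warning: at most three "nameserver" entries
# are honoured (glibc's MAXNS is 3), so only the first three are applied.
MAX_NAMESERVERS = 3

def apply_limit(nameservers: list[str]) -> list[str]:
    applied = nameservers[:MAX_NAMESERVERS]
    if len(nameservers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: " + " ".join(applied))
    return applied

# 192.0.2.53 is a made-up fourth entry (TEST-NET-1) to trigger the warning:
apply_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"])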
Jan 20 01:56:13.282654 containerd[1643]: time="2026-01-20T01:56:13.280579957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.283153 kubelet[3123]: E0120 01:56:13.280965 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.283153 kubelet[3123]: E0120 01:56:13.281166 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:56:13.283153 kubelet[3123]: E0120 01:56:13.281259 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4"
Jan 20 01:56:13.283404 kubelet[3123]: E0120 01:56:13.281340 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34085babebecb4b60bc96719d5fca9d7988af4bcfc514db6623d544cc1c501fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99"
Jan 20 01:56:13.823922 containerd[1643]: time="2026-01-20T01:56:13.821968161Z" level=error msg="Failed to destroy network for sandbox \"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.834164 systemd[1]: run-netns-cni\x2daf067046\x2d3a25\x2dd8a9\x2dee93\x2d7401aab84a63.mount: Deactivated successfully.
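Each aborted sandbox also leaves a network namespace behind, and the run-netns-cni\x2d... lines show systemd deactivating the corresponding mount unit. The \x2d sequences are systemd unit-name escaping: a plain '-' separates path components, while a literal '-' in the path is encoded as \x2d. A small decoder sketch (hypothetical helper; the escaping rules are systemd's documented ones) recovers the netns mount path from a unit name:

import re

def systemd_unescape_path(unit: str) -> str:
    # In a systemd mount unit name, '-' separates path components and
    # \xXX encodes a literal byte (\x2d is '-').
    body = unit.removesuffix(".mount")
    decoded = re.sub(r'\\x([0-9a-fA-F]{2})|-',
                     lambda m: chr(int(m.group(1), 16)) if m.group(1) else "/",
                     body)
    return "/" + decoded

print(systemd_unescape_path(
    r"run-netns-cni\x2daf067046\x2d3a25\x2dd8a9\x2dee93\x2d7401aab84a63.mount"))
# -> /run/netns/cni-af067046-3a25-d8a9-ee93-7401aab84a63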
Jan 20 01:56:13.845640 containerd[1643]: time="2026-01-20T01:56:13.845168840Z" level=error msg="Failed to destroy network for sandbox \"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.871775 systemd[1]: run-netns-cni\x2de6a1c71c\x2d4b74\x2de81e\x2dd8dc\x2d406fe415e1f1.mount: Deactivated successfully.
Jan 20 01:56:13.900546 containerd[1643]: time="2026-01-20T01:56:13.892670314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.907546 kubelet[3123]: E0120 01:56:13.903653 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.907546 kubelet[3123]: E0120 01:56:13.903871 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:56:13.907546 kubelet[3123]: E0120 01:56:13.903905 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft"
Jan 20 01:56:13.916374 kubelet[3123]: E0120 01:56:13.903973 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38f3692246acb49296a96680dbd4d617135a67605366e73a0f14e96db3171f9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340"
Jan 20 01:56:13.936339 containerd[1643]: time="2026-01-20T01:56:13.935235794Z" level=error msg="Failed to destroy network for sandbox \"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.969408 containerd[1643]: time="2026-01-20T01:56:13.963265878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.969766 kubelet[3123]: E0120 01:56:13.969127 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:56:13.969766 kubelet[3123]: E0120 01:56:13.969262 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:56:13.969766 kubelet[3123]: E0120 01:56:13.969302 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h"
Jan 20 01:56:13.969974 kubelet[3123]: E0120 01:56:13.969385 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e5fa008a172b7cc28ee91d43a418e105b18c9770098822d83140a1aa808622b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 01:56:13.985753 systemd[1]: run-netns-cni\x2d1e165d64\x2dad99\x2d46c5\x2dc1ef\x2d3c3145c4cad2.mount: Deactivated successfully.
Jan 20 01:56:14.103855 containerd[1643]: time="2026-01-20T01:56:14.103420959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.109816 kubelet[3123]: E0120 01:56:14.104800 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.109816 kubelet[3123]: E0120 01:56:14.104911 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:14.109816 kubelet[3123]: E0120 01:56:14.104946 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:14.110051 kubelet[3123]: E0120 01:56:14.105014 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c3fe53e73a0566671c9e838ec29a6ded489cdf4ccc7132bf8439f4fc03c6168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:56:14.344231 containerd[1643]: time="2026-01-20T01:56:14.340843746Z" level=error msg="Failed to destroy network for sandbox \"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.352999 systemd[1]: run-netns-cni\x2d3dbf7c75\x2de66a\x2d8daa\x2d264a\x2d8698cb569c50.mount: Deactivated successfully. 
Jan 20 01:56:14.445077 containerd[1643]: time="2026-01-20T01:56:14.443429836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.445371 kubelet[3123]: E0120 01:56:14.443849 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.445371 kubelet[3123]: E0120 01:56:14.443923 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:14.445371 kubelet[3123]: E0120 01:56:14.443958 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:14.450073 kubelet[3123]: E0120 01:56:14.444025 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1dab2bd166184ebbace179a7a2951fc811b2a59444cd0127350b078af4d2f79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:56:14.922365 containerd[1643]: time="2026-01-20T01:56:14.919946587Z" level=error msg="Failed to destroy network for sandbox \"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:14.933886 systemd[1]: run-netns-cni\x2dd9088095\x2d60f9\x2d8dc0\x2de9c5\x2d811c1f0b9c73.mount: Deactivated successfully. 
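
Note: each failed RunPodSandbox shows up four times in kubelet's output because the same gRPC error is re-logged at successive layers: the CRI client (log.go:32), the sandbox helper (kuberuntime_sandbox.go:70), the runtime manager (kuberuntime_manager.go:1252), and finally the pod worker (pod_workers.go:1301), which wraps it in CreatePodSandboxError and requeues the pod with backoff, hence the retries visible later in the log. The "rpc error: code = Unknown desc = ..." framing is grpc-go's rendering of a server-side error that carried no explicit status code, as this illustrative sketch shows (the sandbox id is abbreviated from the log):

// cri_error.go - illustrative: why the kubelet lines all read
// "rpc error: code = Unknown desc = failed to setup network ...".
// A plain error returned by the server is mapped to codes.Unknown,
// and the client renders it with the "rpc error: code = ... desc = ..." prefix.
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// What the CRI server (containerd) returns when the CNI plugin fails:
	cause := errors.New(`failed to setup network for sandbox "38f369...": ` +
		`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

	// grpc-go wraps a plain error in an Unknown-code status:
	st := status.Convert(cause)
	fmt.Println(st.Code()) // Unknown
	fmt.Println(st.Err())  // rpc error: code = Unknown desc = failed to setup network ...
}
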
Jan 20 01:56:15.049057 containerd[1643]: time="2026-01-20T01:56:15.036142622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:15.049458 kubelet[3123]: E0120 01:56:15.041075 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:15.049458 kubelet[3123]: E0120 01:56:15.041174 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:15.049458 kubelet[3123]: E0120 01:56:15.044110 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:15.050191 kubelet[3123]: E0120 01:56:15.049836 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df5875f4258957704f7974eed01d9b7655193982137435a925a290d1909b8964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:56:19.926539 kubelet[3123]: E0120 01:56:19.926491 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:22.783389 containerd[1643]: time="2026-01-20T01:56:22.780798911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:22.783389 containerd[1643]: time="2026-01-20T01:56:22.782476972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 
01:56:23.999434 containerd[1643]: time="2026-01-20T01:56:23.999357595Z" level=error msg="Failed to destroy network for sandbox \"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.041141 systemd[1]: run-netns-cni\x2da42f1f49\x2dc2ab\x2dfe61\x2d2c5c\x2d11374662b529.mount: Deactivated successfully. Jan 20 01:56:24.182186 containerd[1643]: time="2026-01-20T01:56:24.176104090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.182525 kubelet[3123]: E0120 01:56:24.176476 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.182525 kubelet[3123]: E0120 01:56:24.176619 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:24.182525 kubelet[3123]: E0120 01:56:24.176658 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:24.185209 kubelet[3123]: E0120 01:56:24.185126 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bc9b2af94387e5de5d36e2ba5a20713d46b0affaf3716e9a84b4380f91ba5a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:56:24.206533 containerd[1643]: time="2026-01-20T01:56:24.203934869Z" level=error msg="Failed to destroy network for 
sandbox \"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.219586 systemd[1]: run-netns-cni\x2d79268aee\x2d6cdf\x2d47dd\x2d1909\x2dae18ef8dd4d6.mount: Deactivated successfully. Jan 20 01:56:24.260125 containerd[1643]: time="2026-01-20T01:56:24.259582998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.260457 kubelet[3123]: E0120 01:56:24.260153 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:24.260457 kubelet[3123]: E0120 01:56:24.260249 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:24.280237 kubelet[3123]: E0120 01:56:24.260289 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:24.280237 kubelet[3123]: E0120 01:56:24.265823 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"311f4dacd5f7db63d91628faf4744677d018e779f610f586096eaf01b10b111a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:56:26.859629 containerd[1643]: time="2026-01-20T01:56:26.856771363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:26.888603 
kubelet[3123]: E0120 01:56:26.883875 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:26.900309 containerd[1643]: time="2026-01-20T01:56:26.900017918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:26.961884 containerd[1643]: time="2026-01-20T01:56:26.933842207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:27.839579 containerd[1643]: time="2026-01-20T01:56:27.837873929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:27.840192 containerd[1643]: time="2026-01-20T01:56:27.840160344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:28.831195 kubelet[3123]: E0120 01:56:28.797369 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:28.860495 containerd[1643]: time="2026-01-20T01:56:28.806934234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:29.839050 containerd[1643]: time="2026-01-20T01:56:29.838245121Z" level=error msg="Failed to destroy network for sandbox \"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:29.886521 systemd[1]: run-netns-cni\x2d62c5b3b1\x2d3473\x2d4a91\x2d733b\x2d3bb2acc5b600.mount: Deactivated successfully. 
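
Note: the interleaved "Nameserver limits exceeded" warnings from dns.go are unrelated to the CNI failures. Kubelet propagates at most three nameservers from the node's resolv.conf into pod resolv.conf files (glibc resolvers ignore entries beyond the third), so extras are dropped and the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8, is logged. An illustrative sketch of that kind of limit check, not kubelet's actual code:

// dns_limit_check.go - illustrative only: counts nameserver entries in a
// resolv.conf and warns, like kubelet's dns.go, when more than three appear.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc uses at most three nameserver entries

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
			"the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
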
Jan 20 01:56:29.962550 containerd[1643]: time="2026-01-20T01:56:29.961380025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:29.964902 kubelet[3123]: E0120 01:56:29.963855 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:29.964902 kubelet[3123]: E0120 01:56:29.963990 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:56:29.964902 kubelet[3123]: E0120 01:56:29.964027 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:56:29.965547 kubelet[3123]: E0120 01:56:29.964102 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d182cbdd90d020b5ea498272845057e09176812eb2df5e41adeaa61b3da0786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:56:30.248907 containerd[1643]: time="2026-01-20T01:56:30.231169236Z" level=error msg="Failed to destroy network for sandbox \"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:30.289221 systemd[1]: run-netns-cni\x2d531fc35a\x2d4a12\x2dd242\x2d6b8d\x2d6a44314eaf38.mount: Deactivated successfully. 
Jan 20 01:56:30.367269 containerd[1643]: time="2026-01-20T01:56:30.365902820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:30.397074 kubelet[3123]: E0120 01:56:30.394167 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:30.397074 kubelet[3123]: E0120 01:56:30.394268 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:30.397074 kubelet[3123]: E0120 01:56:30.394300 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:30.415809 kubelet[3123]: E0120 01:56:30.394364 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1b217cc62e34ec934960fcb37f2f0a4b56a9957e0af5dc292d39e1fb8c44de6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:56:31.085927 containerd[1643]: time="2026-01-20T01:56:31.083506921Z" level=error msg="Failed to destroy network for sandbox \"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.108579 systemd[1]: run-netns-cni\x2d66ac0176\x2d3223\x2d01a5\x2d58e4\x2d166d7f8bc476.mount: Deactivated successfully. 
Jan 20 01:56:31.224902 containerd[1643]: time="2026-01-20T01:56:31.220130257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.237106 kubelet[3123]: E0120 01:56:31.236970 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.237106 kubelet[3123]: E0120 01:56:31.237061 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:31.237106 kubelet[3123]: E0120 01:56:31.237099 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:31.249223 kubelet[3123]: E0120 01:56:31.237170 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a18f859d5a869af104d54485dae92caf0e44c5c7b4d6211b7f8d36346c6d609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:56:31.381889 containerd[1643]: time="2026-01-20T01:56:31.317189274Z" level=error msg="Failed to destroy network for sandbox \"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.436878 systemd[1]: run-netns-cni\x2dbbe14700\x2d062c\x2d1630\x2d7ab7\x2dcee5809992a5.mount: Deactivated successfully. 
Jan 20 01:56:31.549905 containerd[1643]: time="2026-01-20T01:56:31.546816126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.582841 kubelet[3123]: E0120 01:56:31.581364 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.583269 kubelet[3123]: E0120 01:56:31.583233 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:56:31.586057 kubelet[3123]: E0120 01:56:31.585797 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:56:31.586057 kubelet[3123]: E0120 01:56:31.585902 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2cbe86e511d40730228cad263430e84e3f3837cafe09dba3fdb0ec7b9406fe4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:56:31.708749 containerd[1643]: time="2026-01-20T01:56:31.701841969Z" level=error msg="Failed to destroy network for sandbox \"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.725401 systemd[1]: run-netns-cni\x2d3e879066\x2d6d45\x2d0737\x2d30d9\x2df4ac96e645ef.mount: Deactivated successfully. 
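
Note: the run-netns-cni\x2d....mount units systemd keeps deactivating are the mount units backing each sandbox's network namespace file under /run/netns; tearing down a failed sandbox unmounts that file. The \x2d sequences are systemd's escaping of literal '-' characters in a path-derived unit name, since a plain '-' in a unit name separates path components. A small Go stand-in for systemd-escape --unescape, decoding the unit named just above:

// unescape_unit.go - decodes the \x2d escapes systemd uses in unit names,
// recovering the CNI netns path from a run-netns-cni...mount unit.
// A hand-rolled stand-in for `systemd-escape --unescape`.
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := `run-netns-cni\x2d3e879066\x2d6d45\x2d0737\x2d30d9\x2df4ac96e645ef.mount` // from the log
	name := strings.TrimSuffix(unit, ".mount")
	path := "/" + strings.ReplaceAll(name, "-", "/") // plain '-' separates path components
	path = strings.ReplaceAll(path, `\x2d`, "-")     // \x2d is a literal '-' in the path
	fmt.Println(path) // /run/netns/cni-3e879066-6d45-0737-30d9-f4ac96e645ef
}
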
Jan 20 01:56:31.745269 containerd[1643]: time="2026-01-20T01:56:31.728971090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.745590 kubelet[3123]: E0120 01:56:31.729491 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.745590 kubelet[3123]: E0120 01:56:31.729578 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:56:31.745590 kubelet[3123]: E0120 01:56:31.729611 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:56:31.747073 kubelet[3123]: E0120 01:56:31.729768 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b9cfd22100cc8bf65130112f8a9124e307a4beeffa44ce718f89d1e3cff4956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:56:31.833766 containerd[1643]: time="2026-01-20T01:56:31.830493157Z" level=error msg="Failed to destroy network for sandbox \"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:31.844358 systemd[1]: run-netns-cni\x2d3e24eb2c\x2d3d79\x2d8e51\x2dcc49\x2d534b16e8791d.mount: Deactivated successfully. 
Jan 20 01:56:32.035364 containerd[1643]: time="2026-01-20T01:56:32.032844733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:32.045104 kubelet[3123]: E0120 01:56:32.033993 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:32.045104 kubelet[3123]: E0120 01:56:32.034080 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:32.045104 kubelet[3123]: E0120 01:56:32.034118 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:32.045327 kubelet[3123]: E0120 01:56:32.034186 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10cb4399b296c2e335c11c6e6bc55d3ca3ca18925581df00ff9222cc97f8b5bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:56:35.789004 containerd[1643]: time="2026-01-20T01:56:35.788947336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:35.807453 containerd[1643]: time="2026-01-20T01:56:35.806906547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:37.759451 containerd[1643]: time="2026-01-20T01:56:37.759179206Z" level=error msg="Failed to destroy network for sandbox \"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:37.791347 systemd[1]: run-netns-cni\x2dd4ba756b\x2dac70\x2d6429\x2dae07\x2d341e02c00c85.mount: Deactivated successfully. Jan 20 01:56:37.814869 containerd[1643]: time="2026-01-20T01:56:37.807113145Z" level=error msg="Failed to destroy network for sandbox \"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:37.890236 systemd[1]: run-netns-cni\x2dcbce8d0b\x2d1ed2\x2d3582\x2db4d5\x2db83c539aee05.mount: Deactivated successfully. Jan 20 01:56:37.944079 containerd[1643]: time="2026-01-20T01:56:37.913460219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:38.099240 containerd[1643]: time="2026-01-20T01:56:37.969290214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:38.206816 kubelet[3123]: E0120 01:56:38.044022 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:38.206816 kubelet[3123]: E0120 01:56:38.055004 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:38.206816 kubelet[3123]: E0120 01:56:38.058467 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:38.206816 kubelet[3123]: E0120 01:56:38.065117 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:38.225383 kubelet[3123]: E0120 01:56:38.065782 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"381a95002830f3cefb5e4cc18cacbcc44a1c7cd5fddbe7efaca9a06aa57a3521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:56:38.225383 kubelet[3123]: E0120 01:56:38.161088 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:38.225383 kubelet[3123]: E0120 01:56:38.208467 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:38.229488 kubelet[3123]: E0120 01:56:38.208663 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05864a00199f8ad7de9f2b5bf4d25e9f33abd2429bf4c5f4f2fc8177932be0a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:56:41.027924 containerd[1643]: time="2026-01-20T01:56:41.027862549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:41.867031 containerd[1643]: time="2026-01-20T01:56:41.864098501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 
01:56:42.294283 containerd[1643]: time="2026-01-20T01:56:42.281597610Z" level=error msg="Failed to destroy network for sandbox \"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:42.319360 systemd[1]: run-netns-cni\x2da7989bd8\x2dbd02\x2df65c\x2d35e9\x2d61bdaa5f2a95.mount: Deactivated successfully. Jan 20 01:56:42.354376 containerd[1643]: time="2026-01-20T01:56:42.354242342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:42.360084 kubelet[3123]: E0120 01:56:42.360020 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:42.376645 kubelet[3123]: E0120 01:56:42.375137 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:56:42.376645 kubelet[3123]: E0120 01:56:42.375209 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:56:42.376645 kubelet[3123]: E0120 01:56:42.375334 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0dd8855ea52e9366163dba26827ac35b1784b11e6c151dd79ed27d10445dc21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:56:43.148481 containerd[1643]: time="2026-01-20T01:56:43.148240400Z" level=error msg="Failed to destroy network for sandbox 
\"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:43.266074 containerd[1643]: time="2026-01-20T01:56:43.255960202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:43.266393 kubelet[3123]: E0120 01:56:43.257008 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:43.266393 kubelet[3123]: E0120 01:56:43.257111 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:43.266393 kubelet[3123]: E0120 01:56:43.257145 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:56:43.266666 kubelet[3123]: E0120 01:56:43.257220 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d33ab5a6ee1f633c5189ef4658bcc59e94cb71a80a591a4b1c81f08df38f062\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:56:43.288486 systemd[1]: run-netns-cni\x2dc9c0adc6\x2d6c73\x2d4dfa\x2d3a0c\x2d16c71c137d9f.mount: Deactivated successfully. 
Jan 20 01:56:45.796523 kubelet[3123]: E0120 01:56:45.796457 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:45.816982 containerd[1643]: time="2026-01-20T01:56:45.814637287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:46.485385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239826756.mount: Deactivated successfully. Jan 20 01:56:46.840130 containerd[1643]: time="2026-01-20T01:56:46.835284300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:46.840130 containerd[1643]: time="2026-01-20T01:56:46.835643397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:47.138273 containerd[1643]: time="2026-01-20T01:56:47.138062714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:47.302221 containerd[1643]: time="2026-01-20T01:56:47.299223484Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:47.302625 containerd[1643]: time="2026-01-20T01:56:47.302584325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156882266" Jan 20 01:56:47.363162 containerd[1643]: time="2026-01-20T01:56:47.353854864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:47.387194 containerd[1643]: time="2026-01-20T01:56:47.372672552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 2m48.606571627s" Jan 20 01:56:47.387194 containerd[1643]: time="2026-01-20T01:56:47.386248414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 01:56:47.815921 containerd[1643]: time="2026-01-20T01:56:47.814250745Z" level=info msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:56:47.874569 kubelet[3123]: E0120 01:56:47.866667 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:47.909341 kubelet[3123]: E0120 01:56:47.909162 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:47.931098 containerd[1643]: time="2026-01-20T01:56:47.922778814Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:48.330282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174744449.mount: Deactivated successfully. Jan 20 01:56:48.456105 containerd[1643]: time="2026-01-20T01:56:48.454303377Z" level=info msg="Container c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:56:48.801012 containerd[1643]: time="2026-01-20T01:56:48.785756822Z" level=info msg="CreateContainer within sandbox \"90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40\"" Jan 20 01:56:48.801012 containerd[1643]: time="2026-01-20T01:56:48.789003130Z" level=info msg="StartContainer for \"c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40\"" Jan 20 01:56:48.801012 containerd[1643]: time="2026-01-20T01:56:48.793438988Z" level=info msg="connecting to shim c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40" address="unix:///run/containerd/s/9dd5e4039d6f5b3712b1257a02176431d1406dc965366ea78fa428141f709a9a" protocol=ttrpc version=3 Jan 20 01:56:49.053477 containerd[1643]: time="2026-01-20T01:56:49.049777957Z" level=error msg="Failed to destroy network for sandbox \"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.087566 systemd[1]: run-netns-cni\x2d2b8f118b\x2dcb7e\x2dc7bb\x2d9cff\x2dcfe81d71029f.mount: Deactivated successfully. 
Jan 20 01:56:49.113485 kubelet[3123]: E0120 01:56:49.108955 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.114317 containerd[1643]: time="2026-01-20T01:56:49.108518222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.114499 kubelet[3123]: E0120 01:56:49.113455 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:49.114499 kubelet[3123]: E0120 01:56:49.113771 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:56:49.114499 kubelet[3123]: E0120 01:56:49.114167 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"403f649a73b00ddfaf802c22feda9479e47475047603be008db57c79757fb231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:56:49.364097 containerd[1643]: time="2026-01-20T01:56:49.353650582Z" level=error msg="Failed to destroy network for sandbox \"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.370392 systemd[1]: run-netns-cni\x2d498d6a45\x2d5bab\x2d9661\x2d7f4b\x2dab27ab266fdb.mount: Deactivated successfully. 
Jan 20 01:56:49.537210 containerd[1643]: time="2026-01-20T01:56:49.518023756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.538545 kubelet[3123]: E0120 01:56:49.522788 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.538545 kubelet[3123]: E0120 01:56:49.524590 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:56:49.538545 kubelet[3123]: E0120 01:56:49.524642 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:56:49.538919 kubelet[3123]: E0120 01:56:49.524840 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2946c3844e18be172a58ecd2205772e672049a36a95ce69b1aeb15981c69d791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:56:49.566527 containerd[1643]: time="2026-01-20T01:56:49.566455158Z" level=error msg="Failed to destroy network for sandbox \"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.606612 systemd[1]: Started cri-containerd-c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40.scope - libcontainer container c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40. Jan 20 01:56:49.617586 systemd[1]: run-netns-cni\x2dbd6a3e5c\x2d5ed6\x2da4d0\x2d78e6\x2db0dc72d82093.mount: Deactivated successfully. 
Jan 20 01:56:49.647818 containerd[1643]: time="2026-01-20T01:56:49.643471634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.659811 kubelet[3123]: E0120 01:56:49.656126 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.659811 kubelet[3123]: E0120 01:56:49.659557 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:56:49.659811 kubelet[3123]: E0120 01:56:49.659600 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6455dcb75d-z7fp4" Jan 20 01:56:49.686151 kubelet[3123]: E0120 01:56:49.682598 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0998b8eaeeb45df27bf800dfd8c5c6270c029a35befb1bb79dcce06ff38deee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:56:49.845263 containerd[1643]: time="2026-01-20T01:56:49.845053327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:54.137963 containerd[1643]: time="2026-01-20T01:56:54.137839955Z" level=error msg="get state for c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40" error="context deadline exceeded" Jan 20 01:56:54.207797 containerd[1643]: time="2026-01-20T01:56:54.153789898Z" level=warning msg="unknown status" status=0 Jan 20 01:56:54.587139 containerd[1643]: time="2026-01-20T01:56:54.586879762Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:54.648428 containerd[1643]: time="2026-01-20T01:56:54.646942597Z" level=error msg="Failed to destroy network for sandbox \"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.720923 systemd[1]: run-netns-cni\x2dcd10cd29\x2d2b0e\x2d17d9\x2df960\x2d7cf9a97b834a.mount: Deactivated successfully. Jan 20 01:56:54.791111 containerd[1643]: time="2026-01-20T01:56:54.790865677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.818380 kubelet[3123]: E0120 01:56:54.814566 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.822148 kubelet[3123]: E0120 01:56:54.814798 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:54.822148 kubelet[3123]: E0120 01:56:54.821849 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:56:54.828620 containerd[1643]: time="2026-01-20T01:56:54.825508367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:54.853918 kubelet[3123]: E0120 01:56:54.851838 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0528e70c0f9a7c1a5c0fb1dcc3a5101ae8559ddd3dbb030ed025b04e8f60d77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:56:55.472539 kernel: audit: type=1334 audit(1768874215.345:615): prog-id=182 op=LOAD Jan 20 01:56:55.472810 kernel: audit: type=1300 audit(1768874215.345:615): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.345000 audit: BPF prog-id=182 op=LOAD Jan 20 01:56:55.345000 audit[6871]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.568989 kernel: audit: type=1327 audit(1768874215.345:615): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.569205 kernel: audit: type=1334 audit(1768874215.346:616): prog-id=183 op=LOAD Jan 20 01:56:55.346000 audit: BPF prog-id=183 op=LOAD Jan 20 01:56:55.584316 kernel: audit: type=1300 audit(1768874215.346:616): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.346000 audit[6871]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.672297 kernel: audit: type=1327 audit(1768874215.346:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.346000 audit: BPF prog-id=183 op=UNLOAD Jan 20 01:56:55.704834 kernel: audit: type=1334 audit(1768874215.346:617): prog-id=183 op=UNLOAD Jan 20 01:56:55.704995 kernel: audit: type=1300 audit(1768874215.346:617): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.346000 audit[6871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.761362 kernel: audit: type=1327 audit(1768874215.346:617): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.346000 audit: BPF prog-id=182 op=UNLOAD Jan 20 01:56:55.780368 kernel: audit: type=1334 audit(1768874215.346:618): prog-id=182 op=UNLOAD Jan 20 01:56:55.346000 audit[6871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.798137 containerd[1643]: time="2026-01-20T01:56:55.792037055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:55.346000 audit: BPF prog-id=184 op=LOAD Jan 20 01:56:55.346000 audit[6871]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3731 pid=6871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:55.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339626630383263653431383938633533623535623535326138313064 Jan 20 01:56:55.878292 containerd[1643]: time="2026-01-20T01:56:55.864002962Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:56:56.324898 containerd[1643]: time="2026-01-20T01:56:56.324822238Z" level=error msg="Failed to destroy network for sandbox \"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.380751 systemd[1]: run-netns-cni\x2d715dfb1a\x2d8e4a\x2dd917\x2de1ab\x2dda174d5dc498.mount: Deactivated successfully. 
Jan 20 01:56:56.509216 containerd[1643]: time="2026-01-20T01:56:56.508983387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.516453 kubelet[3123]: E0120 01:56:56.515604 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.534427 kubelet[3123]: E0120 01:56:56.526901 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:56.534427 kubelet[3123]: E0120 01:56:56.526968 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:56:56.540791 kubelet[3123]: E0120 01:56:56.537323 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21ca2a4cc5a4ab6ab534c1b50b707d4cd764663fff22be1616672cbbde8f422e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:56:56.633526 containerd[1643]: time="2026-01-20T01:56:56.621136088Z" level=error msg="Failed to destroy network for sandbox \"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.666916 systemd[1]: run-netns-cni\x2d4bf7bce7\x2daeba\x2dcc67\x2dc41b\x2dedb594c01f81.mount: Deactivated successfully. 
Jan 20 01:56:56.872461 containerd[1643]: time="2026-01-20T01:56:56.830968662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.873413 kubelet[3123]: E0120 01:56:56.844181 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:56.873413 kubelet[3123]: E0120 01:56:56.846010 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:56.873413 kubelet[3123]: E0120 01:56:56.860219 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:56:56.873589 kubelet[3123]: E0120 01:56:56.860924 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72912ceaf6300cb95d28b128091e41d59178829a73673ccc77a44e7f13e528b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:56:56.907332 containerd[1643]: time="2026-01-20T01:56:56.904396228Z" level=info msg="StartContainer for \"c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40\" returns successfully" Jan 20 01:56:57.470197 containerd[1643]: time="2026-01-20T01:56:57.427060736Z" level=error msg="Failed to destroy network for sandbox \"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:57.522048 systemd[1]: 
run-netns-cni\x2db6b8ce51\x2d56a9\x2d21d4\x2d9857\x2d1938c1507246.mount: Deactivated successfully. Jan 20 01:57:06.449164 containerd[1643]: time="2026-01-20T01:56:59.558532223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:08.483110 kubelet[3123]: E0120 01:57:08.481052 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:08.483110 kubelet[3123]: E0120 01:57:08.481214 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:57:08.483110 kubelet[3123]: E0120 01:57:08.481251 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" Jan 20 01:57:08.484402 kubelet[3123]: E0120 01:57:08.482892 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2100a9169ea10cf320b6e9f4fb5ec94ed482670a5a76b1ceef3ec0e40f8f5a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:57:08.655836 kubelet[3123]: E0120 01:57:08.649280 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:08.656566 kubelet[3123]: E0120 01:57:08.656472 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:08.675319 containerd[1643]: time="2026-01-20T01:57:08.667099770Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:08.904114 containerd[1643]: time="2026-01-20T01:57:08.903820464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:08.923226 containerd[1643]: time="2026-01-20T01:57:08.923153428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:09.341838 kubelet[3123]: E0120 01:57:09.337134 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:09.364871 containerd[1643]: time="2026-01-20T01:57:09.363269317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:09.413786 containerd[1643]: time="2026-01-20T01:57:09.412219756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:09.464838 kubelet[3123]: E0120 01:57:09.463124 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:09.991105 containerd[1643]: time="2026-01-20T01:57:09.959508242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:10.502976 kubelet[3123]: I0120 01:57:10.502882 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s9z5p" podStartSLOduration=27.669552798 podStartE2EDuration="4m57.499665457s" podCreationTimestamp="2026-01-20 01:52:13 +0000 UTC" firstStartedPulling="2026-01-20 01:52:17.565882594 +0000 UTC m=+152.014331300" lastFinishedPulling="2026-01-20 01:56:47.395995124 +0000 UTC m=+421.844443959" observedRunningTime="2026-01-20 01:57:10.45350392 +0000 UTC m=+444.901952635" watchObservedRunningTime="2026-01-20 01:57:10.499665457 +0000 UTC m=+444.948114183" Jan 20 01:57:11.899152 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:57:11.899474 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 01:57:12.329780 kubelet[3123]: E0120 01:57:12.315175 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:12.725327 containerd[1643]: time="2026-01-20T01:57:12.720990578Z" level=error msg="Failed to destroy network for sandbox \"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:12.766454 systemd[1]: run-netns-cni\x2db9db4285\x2dd1cf\x2d9567\x2d2bde\x2d8af5a41ce7a8.mount: Deactivated successfully. 
Jan 20 01:57:13.115863 containerd[1643]: time="2026-01-20T01:57:13.113950562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.123577 kubelet[3123]: E0120 01:57:13.123515 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.133431 kubelet[3123]: E0120 01:57:13.125067 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:57:13.133431 kubelet[3123]: E0120 01:57:13.125122 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-szpvj" Jan 20 01:57:13.133431 kubelet[3123]: E0120 01:57:13.125209 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bc5adee8e262f9e24f83ec8311c4ae61f87599f7aa30ffd37285cf8ed62ed6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:57:16.260953 containerd[1643]: time="2026-01-20T01:57:16.253431406Z" level=info msg="container event discarded" container=f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99 type=CONTAINER_CREATED_EVENT Jan 20 01:57:16.263548 containerd[1643]: time="2026-01-20T01:57:16.262122135Z" level=info msg="container event discarded" container=f5c1e5bcfe8f842b6812b54428af21037ef0d11d08ca26babd0661209f171f99 type=CONTAINER_STARTED_EVENT Jan 20 01:57:17.569551 containerd[1643]: time="2026-01-20T01:57:17.569442630Z" level=info msg="container event discarded" container=90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a type=CONTAINER_CREATED_EVENT Jan 20 01:57:17.569551 containerd[1643]: 
time="2026-01-20T01:57:17.569506541Z" level=info msg="container event discarded" container=90ef88d76c0030b947585dd9d3d8ec7e21848368d8abdfa9e4690c0429bf7b2a type=CONTAINER_STARTED_EVENT Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.520 [INFO][7239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.521 [INFO][7239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" iface="eth0" netns="/var/run/netns/cni-ed031ca9-a786-4c38-f4ba-9d28fb6874a2" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.523 [INFO][7239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" iface="eth0" netns="/var/run/netns/cni-ed031ca9-a786-4c38-f4ba-9d28fb6874a2" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.527 [INFO][7239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" iface="eth0" netns="/var/run/netns/cni-ed031ca9-a786-4c38-f4ba-9d28fb6874a2" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.531 [INFO][7239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:16.531 [INFO][7239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:18.245 [INFO][7305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" HandleID="k8s-pod-network.13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:18.273 [INFO][7305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:19.135318 containerd[1643]: 2026-01-20 01:57:18.288 [INFO][7305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:19.142268 containerd[1643]: 2026-01-20 01:57:18.822 [WARNING][7305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" HandleID="k8s-pod-network.13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:19.142268 containerd[1643]: 2026-01-20 01:57:18.822 [INFO][7305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" HandleID="k8s-pod-network.13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:19.142268 containerd[1643]: 2026-01-20 01:57:18.945 [INFO][7305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:19.142268 containerd[1643]: 2026-01-20 01:57:19.087 [INFO][7239] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a" Jan 20 01:57:19.192218 systemd[1]: run-netns-cni\x2ded031ca9\x2da786\x2d4c38\x2df4ba\x2d9d28fb6874a2.mount: Deactivated successfully. Jan 20 01:57:19.265377 containerd[1643]: time="2026-01-20T01:57:19.265124786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:19.290299 kubelet[3123]: E0120 01:57:19.290226 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:19.305326 kubelet[3123]: E0120 01:57:19.305254 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:57:19.308425 kubelet[3123]: E0120 01:57:19.308344 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" Jan 20 01:57:19.309120 kubelet[3123]: E0120 01:57:19.308645 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13d073c24dc10d7d7f673fa1f527c03a0153d6dd32af070829d6ae085313bf2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.659 [INFO][7254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.659 [INFO][7254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" iface="eth0" netns="/var/run/netns/cni-dc8ae721-c1d8-d899-6084-89bdb895686a" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.659 [INFO][7254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" iface="eth0" netns="/var/run/netns/cni-dc8ae721-c1d8-d899-6084-89bdb895686a" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.660 [INFO][7254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" iface="eth0" netns="/var/run/netns/cni-dc8ae721-c1d8-d899-6084-89bdb895686a" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.660 [INFO][7254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:16.660 [INFO][7254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:18.319 [INFO][7324] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" HandleID="k8s-pod-network.8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:18.337 [INFO][7324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:19.730356 containerd[1643]: 2026-01-20 01:57:19.024 [INFO][7324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:19.754418 containerd[1643]: 2026-01-20 01:57:19.291 [WARNING][7324] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" HandleID="k8s-pod-network.8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:19.754418 containerd[1643]: 2026-01-20 01:57:19.291 [INFO][7324] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" HandleID="k8s-pod-network.8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:19.754418 containerd[1643]: 2026-01-20 01:57:19.390 [INFO][7324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:19.754418 containerd[1643]: 2026-01-20 01:57:19.582 [INFO][7254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba" Jan 20 01:57:19.778306 systemd[1]: run-netns-cni\x2ddc8ae721\x2dc1d8\x2dd899\x2d6084\x2d89bdb895686a.mount: Deactivated successfully. 
Jan 20 01:57:19.862100 kubelet[3123]: E0120 01:57:19.855515 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:19.944295 kubelet[3123]: E0120 01:57:19.893445 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:19.946258 containerd[1643]: time="2026-01-20T01:57:19.932484599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:20.159481 containerd[1643]: time="2026-01-20T01:57:20.104621317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:20.200060 containerd[1643]: time="2026-01-20T01:57:20.199846538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:20.200442 kubelet[3123]: E0120 01:57:20.200288 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:20.200442 kubelet[3123]: E0120 01:57:20.200369 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:57:20.200442 kubelet[3123]: E0120 01:57:20.200399 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hknvv" Jan 20 01:57:20.205074 kubelet[3123]: E0120 01:57:20.200763 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hknvv_kube-system(05750e4a-a6e9-4631-9a1f-786fc076da7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b1163a8d14415083f5eb72c73def877afe545c1e9b1d040ca3d6f6be31826ba\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hknvv" podUID="05750e4a-a6e9-4631-9a1f-786fc076da7e" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.456 [INFO][7198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.456 [INFO][7198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" iface="eth0" netns="/var/run/netns/cni-fbcc814e-426d-1ff8-10d7-89b100fdf712" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.494 [INFO][7198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" iface="eth0" netns="/var/run/netns/cni-fbcc814e-426d-1ff8-10d7-89b100fdf712" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.495 [INFO][7198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" iface="eth0" netns="/var/run/netns/cni-fbcc814e-426d-1ff8-10d7-89b100fdf712" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.495 [INFO][7198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:16.495 [INFO][7198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:18.711 [INFO][7303] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" HandleID="k8s-pod-network.675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:18.742 [INFO][7303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:20.621194 containerd[1643]: 2026-01-20 01:57:19.401 [INFO][7303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:20.621791 containerd[1643]: 2026-01-20 01:57:19.673 [WARNING][7303] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" HandleID="k8s-pod-network.675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:20.621791 containerd[1643]: 2026-01-20 01:57:19.674 [INFO][7303] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" HandleID="k8s-pod-network.675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:20.621791 containerd[1643]: 2026-01-20 01:57:19.713 [INFO][7303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:20.621791 containerd[1643]: 2026-01-20 01:57:20.106 [INFO][7198] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360" Jan 20 01:57:20.639533 systemd[1]: run-netns-cni\x2dfbcc814e\x2d426d\x2d1ff8\x2d10d7\x2d89b100fdf712.mount: Deactivated successfully. Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.665 [INFO][7280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.665 [INFO][7280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" iface="eth0" netns="/var/run/netns/cni-76186d62-a05f-94b5-86b3-f8afe91faf0e" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.665 [INFO][7280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" iface="eth0" netns="/var/run/netns/cni-76186d62-a05f-94b5-86b3-f8afe91faf0e" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.666 [INFO][7280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" iface="eth0" netns="/var/run/netns/cni-76186d62-a05f-94b5-86b3-f8afe91faf0e" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.666 [INFO][7280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:16.666 [INFO][7280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:18.870 [INFO][7325] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" HandleID="k8s-pod-network.0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:18.880 [INFO][7325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:20.827924 containerd[1643]: 2026-01-20 01:57:19.714 [INFO][7325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:20.851377 containerd[1643]: 2026-01-20 01:57:20.097 [WARNING][7325] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" HandleID="k8s-pod-network.0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:20.851377 containerd[1643]: 2026-01-20 01:57:20.098 [INFO][7325] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" HandleID="k8s-pod-network.0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:20.851377 containerd[1643]: 2026-01-20 01:57:20.492 [INFO][7325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:20.851377 containerd[1643]: 2026-01-20 01:57:20.622 [INFO][7280] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002" Jan 20 01:57:20.909636 containerd[1643]: time="2026-01-20T01:57:20.850618357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:20.964823 kubelet[3123]: E0120 01:57:20.911155 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:20.964823 kubelet[3123]: E0120 01:57:20.911241 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:57:20.964823 kubelet[3123]: E0120 01:57:20.911278 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ft2sl" Jan 20 01:57:20.967489 kubelet[3123]: E0120 01:57:20.911344 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ft2sl_kube-system(dce8f61b-70a0-47ff-b7a3-9a49a15c7261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"675627a763760802b254fb44fc746f29e507d25a56eba5a804eb901d0624d360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ft2sl" podUID="dce8f61b-70a0-47ff-b7a3-9a49a15c7261" Jan 20 01:57:20.973901 systemd[1]: run-netns-cni\x2d76186d62\x2da05f\x2d94b5\x2d86b3\x2df8afe91faf0e.mount: Deactivated successfully. Jan 20 01:57:21.030920 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:41756.service - OpenSSH per-connection server daemon (10.0.0.1:41756). 
Jan 20 01:57:21.038171 containerd[1643]: time="2026-01-20T01:57:21.034602157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:21.174589 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 01:57:21.192503 kernel: audit: type=1130 audit(1768874241.028:620): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.44:22-10.0.0.1:41756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:21.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.44:22-10.0.0.1:41756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:21.193148 kubelet[3123]: E0120 01:57:21.127845 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:21.193148 kubelet[3123]: E0120 01:57:21.128022 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:57:21.193148 kubelet[3123]: E0120 01:57:21.128070 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x6f5h" Jan 20 01:57:21.193381 kubelet[3123]: E0120 01:57:21.128162 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e0f95fa1314ecc5456cf8100046d1297452480a838e8c96376b2144959b0002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:57:21.716810 kubelet[3123]: E0120 01:57:21.690222 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:21.716810 kubelet[3123]: E0120 01:57:21.700181 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:21.717588 containerd[1643]: time="2026-01-20T01:57:21.717536185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:21.742826 containerd[1643]: time="2026-01-20T01:57:21.742766400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:22.894861 kernel: audit: type=1101 audit(1768874242.797:621): pid=7400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:22.797000 audit[7400]: USER_ACCT pid=7400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:22.899579 containerd[1643]: time="2026-01-20T01:57:22.869774078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:22.843142 sshd-session[7400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:22.900823 sshd[7400]: Accepted publickey for core from 10.0.0.1 port 41756 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:57:22.825000 audit[7400]: CRED_ACQ pid=7400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:22.940348 systemd-logind[1623]: New session 10 of user core. Jan 20 01:57:23.001208 kernel: audit: type=1103 audit(1768874242.825:622): pid=7400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:23.001302 kernel: audit: type=1006 audit(1768874242.825:623): pid=7400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 01:57:22.976923 systemd[1]: Started session-10.scope - Session 10 of User core. 
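[editor's note] The kubelet dns.go:153 warnings above fire because the node's resolv.conf lists more nameservers than the limit of three that kubelet enforces (the classic libc resolver limit); the surplus entries are dropped, and the applied line keeps only the first three. A sketch of that truncation — the fourth server below is a hypothetical addition to show what gets omitted:

```python
# Sketch of the truncation behind kubelet's "Nameserver limits exceeded"
# warning: only the first three nameserver entries are applied.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]  # entries past the limit are omitted

sample = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
print(" ".join(applied_nameservers(sample)))  # -> 1.1.1.1 1.0.0.1 8.8.8.8
```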
Jan 20 01:57:22.825000 audit[7400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed1d33820 a2=3 a3=0 items=0 ppid=1 pid=7400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:23.135797 kernel: audit: type=1300 audit(1768874242.825:623): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed1d33820 a2=3 a3=0 items=0 ppid=1 pid=7400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:23.135971 kernel: audit: type=1327 audit(1768874242.825:623): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:57:23.136029 kernel: audit: type=1105 audit(1768874243.012:624): pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:22.825000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:57:23.012000 audit[7400]: USER_START pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:23.307953 kernel: audit: type=1103 audit(1768874243.067:625): pid=7434 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:23.067000 audit[7434]: CRED_ACQ pid=7434 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:25.371548 systemd-networkd[1534]: calice5c1d73743: Link UP Jan 20 01:57:25.374478 systemd-networkd[1534]: calice5c1d73743: Gained carrier Jan 20 01:57:25.426009 sshd[7434]: Connection closed by 10.0.0.1 port 41756 Jan 20 01:57:25.479034 kernel: audit: type=1106 audit(1768874245.426:626): pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:25.426000 audit[7400]: USER_END pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:25.409290 sshd-session[7400]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:25.441287 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:41756.service: Deactivated successfully. Jan 20 01:57:25.450469 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:57:25.460036 systemd-logind[1623]: Session 10 logged out. Waiting for processes to exit. 
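[editor's note] The audit PROCTITLE records in this stretch carry the process title hex-encoded; when the title is a raw argv list, NUL bytes separate the arguments (the runc record later in this log shows that form). Decoding the record for pid 7400 above recovers the readable title:

```python
# Decode an audit PROCTITLE field: the value is the command line,
# hex-encoded, with NUL bytes standing between argv entries.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# The record logged for pid 7400 above:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```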
Jan 20 01:57:25.465943 systemd-logind[1623]: Removed session 10. Jan 20 01:57:25.556810 kernel: audit: type=1104 audit(1768874245.426:627): pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:25.426000 audit[7400]: CRED_DISP pid=7400 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:25.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.44:22-10.0.0.1:41756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:25.675021 containerd[1643]: 2026-01-20 01:57:15.448 [INFO][7085] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:25.675021 containerd[1643]: 2026-01-20 01:57:16.144 [INFO][7085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6455dcb75d--z7fp4-eth0 whisker-6455dcb75d- calico-system 73120413-751d-4a6a-a82b-54ccc2e8bc99 1343 0 2026-01-20 01:52:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6455dcb75d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6455dcb75d-z7fp4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calice5c1d73743 [] [] }} ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-" Jan 20 01:57:25.675021 containerd[1643]: 2026-01-20 01:57:16.148 [INFO][7085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.675021 containerd[1643]: 2026-01-20 01:57:18.909 [INFO][7298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:18.924 [INFO][7298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a83a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6455dcb75d-z7fp4", "timestamp":"2026-01-20 01:57:18.909270169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:18.924 [INFO][7298] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:20.544 [INFO][7298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:20.544 [INFO][7298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:20.939 [INFO][7298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:21.894 [INFO][7298] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:22.360 [INFO][7298] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:22.581 [INFO][7298] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:22.634 [INFO][7298] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:22.980 [INFO][7298] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:25.692069 containerd[1643]: 2026-01-20 01:57:23.600 [INFO][7298] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:25.695291 containerd[1643]: 2026-01-20 01:57:23.602 [INFO][7298] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Jan 20 01:57:25.695291 containerd[1643]: 2026-01-20 01:57:24.117 [INFO][7298] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="localhost" subnet=192.168.88.128/26 Jan 20 01:57:25.695291 containerd[1643]: 2026-01-20 01:57:24.271 [INFO][7298] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="localhost" subnet=192.168.88.128/26 Jan 20 01:57:25.695291 containerd[1643]: 2026-01-20 01:57:24.272 [INFO][7298] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Jan 20 01:57:25.695291 containerd[1643]: 2026-01-20 01:57:24.330 [ERROR][7298] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1830", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 20 
01:57:25.695291 containerd[1643]: 2026-01-20 01:57:24.433 [INFO][7298] ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.433 [INFO][7298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" host="localhost" Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.491 [INFO][7298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.612 [INFO][7298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" host="localhost" Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.748 [INFO][7298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" host="localhost" Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.749 [INFO][7298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" host="localhost" Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.749 [INFO][7298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:25.695624 containerd[1643]: 2026-01-20 01:57:24.749 [INFO][7298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.696017 containerd[1643]: 2026-01-20 01:57:24.860 [INFO][7085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6455dcb75d--z7fp4-eth0", GenerateName:"whisker-6455dcb75d-", Namespace:"calico-system", SelfLink:"", UID:"73120413-751d-4a6a-a82b-54ccc2e8bc99", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6455dcb75d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6455dcb75d-z7fp4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calice5c1d73743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:25.696017 containerd[1643]: 2026-01-20 01:57:24.869 [INFO][7085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.701553 containerd[1643]: 2026-01-20 01:57:24.870 [INFO][7085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice5c1d73743 ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.701553 containerd[1643]: 2026-01-20 01:57:25.351 [INFO][7085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:25.701845 containerd[1643]: 2026-01-20 01:57:25.352 [INFO][7085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6455dcb75d--z7fp4-eth0", GenerateName:"whisker-6455dcb75d-", Namespace:"calico-system", SelfLink:"", UID:"73120413-751d-4a6a-a82b-54ccc2e8bc99", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6455dcb75d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a", Pod:"whisker-6455dcb75d-z7fp4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calice5c1d73743", MAC:"6e:d5:e9:e7:cd:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:25.702013 containerd[1643]: 2026-01-20 01:57:25.578 [INFO][7085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Namespace="calico-system" Pod="whisker-6455dcb75d-z7fp4" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.403 [INFO][7241] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.403 [INFO][7241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" iface="eth0" netns="/var/run/netns/cni-a1efc00d-2311-e724-fb5a-d31091abcfdc" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.404 [INFO][7241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" iface="eth0" netns="/var/run/netns/cni-a1efc00d-2311-e724-fb5a-d31091abcfdc" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.420 [INFO][7241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" iface="eth0" netns="/var/run/netns/cni-a1efc00d-2311-e724-fb5a-d31091abcfdc" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.425 [INFO][7241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:16.425 [INFO][7241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:18.922 [INFO][7301] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" HandleID="k8s-pod-network.19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:18.937 [INFO][7301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:26.265905 containerd[1643]: 2026-01-20 01:57:24.869 [INFO][7301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:26.353948 containerd[1643]: 2026-01-20 01:57:25.305 [WARNING][7301] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" HandleID="k8s-pod-network.19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:26.353948 containerd[1643]: 2026-01-20 01:57:25.305 [INFO][7301] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" HandleID="k8s-pod-network.19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:26.353948 containerd[1643]: 2026-01-20 01:57:25.376 [INFO][7301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:57:26.353948 containerd[1643]: 2026-01-20 01:57:26.106 [INFO][7241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14" Jan 20 01:57:26.308316 systemd[1]: run-netns-cni\x2da1efc00d\x2d2311\x2de724\x2dfb5a\x2dd31091abcfdc.mount: Deactivated successfully. 
Jan 20 01:57:26.359377 containerd[1643]: time="2026-01-20T01:57:26.358553426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:26.361397 kubelet[3123]: E0120 01:57:26.361330 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:26.362575 kubelet[3123]: E0120 01:57:26.362525 3123 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:57:26.362869 kubelet[3123]: E0120 01:57:26.362826 3123 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" Jan 20 01:57:26.363122 kubelet[3123]: E0120 01:57:26.363067 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19b0c7e50851b77fe19c9d2256cca626359b8a18a83ce770a36965ae5129eb14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:57:26.605538 systemd-networkd[1534]: calice5c1d73743: Gained IPv6LL Jan 20 01:57:26.933137 systemd-networkd[1534]: cali0609d714965: Link UP Jan 20 01:57:27.055316 systemd-networkd[1534]: cali0609d714965: Gained carrier Jan 20 01:57:27.203151 containerd[1643]: time="2026-01-20T01:57:27.202890570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:27.402781 containerd[1643]: 2026-01-20 01:57:21.806 [INFO][7373] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 
01:57:27.402781 containerd[1643]: 2026-01-20 01:57:22.801 [INFO][7373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0 calico-apiserver-74b798b596- calico-apiserver 03f653dd-0210-41e9-9d70-a3905826baa1 1779 0 2026-01-20 01:51:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b798b596 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b798b596-r7ptx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0609d714965 [] [] }} ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-" Jan 20 01:57:27.402781 containerd[1643]: 2026-01-20 01:57:22.806 [INFO][7373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.402781 containerd[1643]: 2026-01-20 01:57:23.778 [INFO][7450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" HandleID="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:23.782 [INFO][7450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" HandleID="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b798b596-r7ptx", "timestamp":"2026-01-20 01:57:23.778218798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:23.791 [INFO][7450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:25.380 [INFO][7450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
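[editor's note] The ipam/customresource.go [ERROR] a few entries back ("the object has been modified; please apply your changes to the latest version and try again") is Kubernetes' optimistic-concurrency conflict on the BlockAffinity custom resource: the update carried a stale resourceVersion. Calico's recovery is visible in the very next entry, where it re-reads and finds the affinity already confirmed. A generic sketch of that read-retry pattern; the store object here is a stand-in, not Calico's datastore API:

```python
class Conflict(Exception):
    """Update rejected: the stored resourceVersion no longer matches."""

def update_with_retry(store, key, mutate, attempts=5):
    # Read-modify-write under optimistic concurrency: on conflict,
    # re-read the latest copy and reapply the change, as the
    # resourceVersion error message instructs.
    for _ in range(attempts):
        obj, version = store.get(key)
        try:
            store.update(key, mutate(obj), expected_version=version)
            return obj
        except Conflict:
            continue  # another writer won the race; retry on a fresh copy
    raise RuntimeError("gave up after repeated update conflicts")
```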
Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:25.380 [INFO][7450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:25.575 [INFO][7450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" host="localhost" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:25.828 [INFO][7450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:26.202 [INFO][7450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:26.283 [INFO][7450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:26.337 [INFO][7450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:27.411183 containerd[1643]: 2026-01-20 01:57:26.337 [INFO][7450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" host="localhost" Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.398 [INFO][7450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790 Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.458 [INFO][7450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" host="localhost" Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.643 [INFO][7450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" host="localhost" Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.643 [INFO][7450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" host="localhost" Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.643 [INFO][7450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:57:27.435149 containerd[1643]: 2026-01-20 01:57:26.643 [INFO][7450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" HandleID="k8s-pod-network.181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Workload="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.435452 containerd[1643]: 2026-01-20 01:57:26.826 [INFO][7373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0", GenerateName:"calico-apiserver-74b798b596-", Namespace:"calico-apiserver", SelfLink:"", UID:"03f653dd-0210-41e9-9d70-a3905826baa1", ResourceVersion:"1779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b798b596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b798b596-r7ptx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0609d714965", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:27.435620 containerd[1643]: 2026-01-20 01:57:26.838 [INFO][7373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.435620 containerd[1643]: 2026-01-20 01:57:26.866 [INFO][7373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0609d714965 ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.435620 containerd[1643]: 2026-01-20 01:57:27.062 [INFO][7373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.435832 containerd[1643]: 2026-01-20 01:57:27.065 [INFO][7373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0", GenerateName:"calico-apiserver-74b798b596-", Namespace:"calico-apiserver", SelfLink:"", UID:"03f653dd-0210-41e9-9d70-a3905826baa1", ResourceVersion:"1779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b798b596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790", Pod:"calico-apiserver-74b798b596-r7ptx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0609d714965", MAC:"da:5d:65:3d:51:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:27.436002 containerd[1643]: 2026-01-20 01:57:27.304 [INFO][7373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-r7ptx" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--r7ptx-eth0" Jan 20 01:57:27.785541 containerd[1643]: time="2026-01-20T01:57:27.784972992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:28.047447 containerd[1643]: time="2026-01-20T01:57:28.047084935Z" level=info msg="connecting to shim 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" address="unix:///run/containerd/s/8b80e074660115321099bad95945b4050212bed6cc7f9808dc153b28cb5821da" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:28.272812 systemd-networkd[1534]: cali0609d714965: Gained IPv6LL Jan 20 01:57:28.755248 systemd-networkd[1534]: cali692bc64ade9: Link UP Jan 20 01:57:28.755935 systemd-networkd[1534]: cali692bc64ade9: Gained carrier Jan 20 01:57:28.998838 containerd[1643]: 2026-01-20 01:57:22.327 [INFO][7387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:28.998838 containerd[1643]: 2026-01-20 01:57:23.067 [INFO][7387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0 calico-apiserver-74b798b596- calico-apiserver 589f656f-1e0a-4667-bc0d-42908aab3340 1341 0 2026-01-20 01:51:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:74b798b596 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b798b596-wbvft eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali692bc64ade9 [] [] }} ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-" Jan 20 01:57:28.998838 containerd[1643]: 2026-01-20 01:57:23.068 [INFO][7387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:28.998838 containerd[1643]: 2026-01-20 01:57:25.367 [INFO][7466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" HandleID="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Workload="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:25.368 [INFO][7466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" HandleID="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Workload="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000592730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b798b596-wbvft", "timestamp":"2026-01-20 01:57:25.367984846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:25.368 [INFO][7466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:26.661 [INFO][7466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:26.662 [INFO][7466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:26.953 [INFO][7466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" host="localhost" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:27.083 [INFO][7466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:27.326 [INFO][7466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:27.374 [INFO][7466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:27.482 [INFO][7466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:29.010207 containerd[1643]: 2026-01-20 01:57:27.482 [INFO][7466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" host="localhost" Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:27.628 [INFO][7466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487 Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:28.038 [INFO][7466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" host="localhost" Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:28.208 [INFO][7466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" host="localhost" Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:28.233 [INFO][7466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" host="localhost" Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:28.242 [INFO][7466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
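[editor's note] Across these IPAM traces the host-affine block 192.168.88.128/26 hands out addresses sequentially under the host-wide lock: .129 to the whisker pod, .130 to calico-apiserver-74b798b596-r7ptx, and .131 to -wbvft. A simplified sketch of claiming the next free host address from such a block; real Calico IPAM also persists the allocation and its handle, which this ignores:

```python
import ipaddress

# Next-free-address selection within the affine block seen in the logs.
# Simplified: no host-wide lock, no datastore writes, no handle tracking.
block = ipaddress.ip_network("192.168.88.128/26")
allocated = {ipaddress.ip_address("192.168.88.129"),   # whisker-6455dcb75d-z7fp4
             ipaddress.ip_address("192.168.88.130")}   # calico-apiserver-...-r7ptx

def claim_next(block, allocated):
    for addr in block.hosts():        # .129 .. .190 for this /26
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted")

print(claim_next(block, allocated))   # -> 192.168.88.131 (...-wbvft)
```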
Jan 20 01:57:29.010794 containerd[1643]: 2026-01-20 01:57:28.249 [INFO][7466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" HandleID="k8s-pod-network.fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Workload="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.011041 containerd[1643]: 2026-01-20 01:57:28.570 [INFO][7387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0", GenerateName:"calico-apiserver-74b798b596-", Namespace:"calico-apiserver", SelfLink:"", UID:"589f656f-1e0a-4667-bc0d-42908aab3340", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b798b596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b798b596-wbvft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali692bc64ade9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:29.011205 containerd[1643]: 2026-01-20 01:57:28.572 [INFO][7387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.011205 containerd[1643]: 2026-01-20 01:57:28.578 [INFO][7387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali692bc64ade9 ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.011205 containerd[1643]: 2026-01-20 01:57:28.741 [INFO][7387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.080743 containerd[1643]: 2026-01-20 01:57:28.742 [INFO][7387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0", GenerateName:"calico-apiserver-74b798b596-", Namespace:"calico-apiserver", SelfLink:"", UID:"589f656f-1e0a-4667-bc0d-42908aab3340", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b798b596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487", Pod:"calico-apiserver-74b798b596-wbvft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali692bc64ade9", MAC:"f2:92:bf:4f:34:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:29.081019 containerd[1643]: 2026-01-20 01:57:28.865 [INFO][7387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" Namespace="calico-apiserver" Pod="calico-apiserver-74b798b596-wbvft" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b798b596--wbvft-eth0" Jan 20 01:57:29.603015 systemd[1]: Started cri-containerd-034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a.scope - libcontainer container 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a. Jan 20 01:57:29.780158 containerd[1643]: time="2026-01-20T01:57:29.780087002Z" level=info msg="connecting to shim 181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790" address="unix:///run/containerd/s/10d49922a63db68776733fc5697f5dfa8f90413099386a4fd383cd0925f7eaf4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:30.074778 systemd-networkd[1534]: cali692bc64ade9: Gained IPv6LL Jan 20 01:57:30.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.44:22-10.0.0.1:55380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:30.510651 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:55380.service - OpenSSH per-connection server daemon (10.0.0.1:55380). Jan 20 01:57:30.531485 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:57:30.531608 kernel: audit: type=1130 audit(1768874250.510:629): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.44:22-10.0.0.1:55380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:57:30.644000 audit: BPF prog-id=185 op=LOAD Jan 20 01:57:30.664964 kernel: audit: type=1334 audit(1768874250.644:630): prog-id=185 op=LOAD Jan 20 01:57:30.663000 audit: BPF prog-id=186 op=LOAD Jan 20 01:57:30.728127 kernel: audit: type=1334 audit(1768874250.663:631): prog-id=186 op=LOAD Jan 20 01:57:30.728272 kernel: audit: type=1300 audit(1768874250.663:631): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.663000 audit[7643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.731145 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:57:30.777063 kubelet[3123]: E0120 01:57:30.776926 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:30.836594 kernel: audit: type=1327 audit(1768874250.663:631): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:30.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:31.008031 kernel: audit: type=1334 audit(1768874250.663:632): prog-id=186 op=UNLOAD Jan 20 01:57:31.008762 kernel: audit: type=1300 audit(1768874250.663:632): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.663000 audit: BPF prog-id=186 op=UNLOAD Jan 20 01:57:30.663000 audit[7643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:31.060835 kernel: audit: type=1327 audit(1768874250.663:632): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:31.095131 kernel: audit: type=1334 audit(1768874250.669:633): 
prog-id=187 op=LOAD Jan 20 01:57:30.669000 audit: BPF prog-id=187 op=LOAD Jan 20 01:57:30.669000 audit[7643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:31.171211 kernel: audit: type=1300 audit(1768874250.669:633): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:30.669000 audit: BPF prog-id=188 op=LOAD Jan 20 01:57:30.669000 audit[7643]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:30.669000 audit: BPF prog-id=188 op=UNLOAD Jan 20 01:57:30.669000 audit[7643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:30.669000 audit: BPF prog-id=187 op=UNLOAD Jan 20 01:57:30.669000 audit[7643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:30.669000 audit: BPF prog-id=189 op=LOAD Jan 20 01:57:30.669000 audit[7643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=7612 pid=7643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:30.669000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033346662643037656535323733626563646666396339306630306233 Jan 20 01:57:31.590171 containerd[1643]: time="2026-01-20T01:57:31.584121863Z" level=info msg="connecting to shim fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487" address="unix:///run/containerd/s/b3fb2a84851b75053d5f4f60f533e0f464b880a12c80a869aaba443dcc93c92e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:31.601051 systemd-networkd[1534]: cali96de260113e: Link UP Jan 20 01:57:31.631270 systemd-networkd[1534]: cali96de260113e: Gained carrier Jan 20 01:57:31.755954 containerd[1643]: time="2026-01-20T01:57:31.749964502Z" level=error msg="get state for 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" error="context deadline exceeded" Jan 20 01:57:31.755954 containerd[1643]: time="2026-01-20T01:57:31.750035077Z" level=warning msg="unknown status" status=0 Jan 20 01:57:31.902000 audit[7741]: USER_ACCT pid=7741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:31.914590 sshd[7741]: Accepted publickey for core from 10.0.0.1 port 55380 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:57:31.921000 audit[7741]: CRED_ACQ pid=7741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:31.927000 audit[7741]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed7502bd0 a2=3 a3=0 items=0 ppid=1 pid=7741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:31.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:57:31.936187 sshd-session[7741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:32.140324 systemd-logind[1623]: New session 11 of user core. Jan 20 01:57:32.160648 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 01:57:32.249000 audit[7741]: USER_START pid=7741 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:32.288000 audit[7795]: CRED_ACQ pid=7795 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:32.293939 containerd[1643]: 2026-01-20 01:57:24.377 [INFO][7408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:32.293939 containerd[1643]: 2026-01-20 01:57:25.382 [INFO][7408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0 coredns-674b8bbfcf- kube-system dce8f61b-70a0-47ff-b7a3-9a49a15c7261 1778 0 2026-01-20 01:49:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ft2sl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96de260113e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-" Jan 20 01:57:32.293939 containerd[1643]: 2026-01-20 01:57:25.382 [INFO][7408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.293939 containerd[1643]: 2026-01-20 01:57:26.343 [INFO][7512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" HandleID="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:26.344 [INFO][7512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" HandleID="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000303e90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ft2sl", "timestamp":"2026-01-20 01:57:26.343646343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:26.344 [INFO][7512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:28.242 [INFO][7512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:28.255 [INFO][7512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:28.777 [INFO][7512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" host="localhost" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:29.588 [INFO][7512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:30.095 [INFO][7512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:30.282 [INFO][7512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:30.342 [INFO][7512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:32.294290 containerd[1643]: 2026-01-20 01:57:30.342 [INFO][7512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" host="localhost" Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:30.390 [INFO][7512] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:30.597 [INFO][7512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" host="localhost" Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:31.237 [INFO][7512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" host="localhost" Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:31.237 [INFO][7512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" host="localhost" Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:31.238 [INFO][7512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:57:32.295119 containerd[1643]: 2026-01-20 01:57:31.238 [INFO][7512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" HandleID="k8s-pod-network.93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Workload="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.295365 containerd[1643]: 2026-01-20 01:57:31.355 [INFO][7408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dce8f61b-70a0-47ff-b7a3-9a49a15c7261", ResourceVersion:"1778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ft2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96de260113e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:32.295617 containerd[1643]: 2026-01-20 01:57:31.378 [INFO][7408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.295617 containerd[1643]: 2026-01-20 01:57:31.378 [INFO][7408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96de260113e ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.295617 containerd[1643]: 2026-01-20 01:57:31.606 [INFO][7408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.295889 
containerd[1643]: 2026-01-20 01:57:31.607 [INFO][7408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dce8f61b-70a0-47ff-b7a3-9a49a15c7261", ResourceVersion:"1778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff", Pod:"coredns-674b8bbfcf-ft2sl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96de260113e", MAC:"ea:e9:42:bb:ae:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:32.295889 containerd[1643]: 2026-01-20 01:57:31.954 [INFO][7408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ft2sl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ft2sl-eth0" Jan 20 01:57:32.367069 systemd[1]: Started cri-containerd-181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790.scope - libcontainer container 181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790. 
Jan 20 01:57:33.378031 systemd-networkd[1534]: cali96de260113e: Gained IPv6LL Jan 20 01:57:44.079963 containerd[1643]: time="2026-01-20T01:57:44.047961156Z" level=error msg="get state for 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" error="context deadline exceeded" Jan 20 01:57:44.253833 containerd[1643]: time="2026-01-20T01:57:44.189799428Z" level=warning msg="unknown status" status=0 Jan 20 01:57:44.262065 containerd[1643]: time="2026-01-20T01:57:44.254626788Z" level=info msg="container event discarded" container=7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1 type=CONTAINER_CREATED_EVENT Jan 20 01:57:44.262065 containerd[1643]: time="2026-01-20T01:57:44.254867796Z" level=info msg="container event discarded" container=7a4d7c91d9a22ffeac7848f0f3751830772e2f1dfa780eebd5bb2ee392da3ed1 type=CONTAINER_STARTED_EVENT Jan 20 01:57:45.825790 kubelet[3123]: E0120 01:57:45.825394 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.805s" Jan 20 01:57:46.898853 kubelet[3123]: E0120 01:57:46.898619 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.064s" Jan 20 01:57:47.233450 containerd[1643]: time="2026-01-20T01:57:47.221943058Z" level=error msg="get state for 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" error="context deadline exceeded" Jan 20 01:57:47.250022 containerd[1643]: time="2026-01-20T01:57:47.249868615Z" level=warning msg="unknown status" status=0 Jan 20 01:57:47.361788 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 20 01:57:47.645918 kernel: audit: type=1334 audit(1768874267.307:643): prog-id=190 op=LOAD Jan 20 01:57:47.646128 kernel: audit: type=1334 audit(1768874267.317:644): prog-id=191 op=LOAD Jan 20 01:57:47.659131 kernel: audit: type=1300 audit(1768874267.317:644): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.659901 kernel: audit: type=1327 audit(1768874267.317:644): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.307000 audit: BPF prog-id=190 op=LOAD Jan 20 01:57:47.737844 kernel: audit: type=1334 audit(1768874267.317:645): prog-id=191 op=UNLOAD Jan 20 01:57:47.739028 kernel: audit: type=1300 audit(1768874267.317:645): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.831413 kernel: audit: type=1327 audit(1768874267.317:645): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.317000 audit: BPF prog-id=191 op=LOAD Jan 20 01:57:47.317000 audit[7757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 
ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.317000 audit: BPF prog-id=191 op=UNLOAD Jan 20 01:57:47.317000 audit[7757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.943072 containerd[1643]: time="2026-01-20T01:57:47.766568492Z" level=info msg="container event discarded" container=df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5 type=CONTAINER_CREATED_EVENT Jan 20 01:57:47.943072 containerd[1643]: time="2026-01-20T01:57:47.767033747Z" level=info msg="container event discarded" container=df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5 type=CONTAINER_STARTED_EVENT Jan 20 01:57:47.943072 containerd[1643]: time="2026-01-20T01:57:47.767072612Z" level=info msg="container event discarded" container=df23b759f5b4da89e96c316d85404c732266ab47580a907a0212147d6a557de5 type=CONTAINER_STOPPED_EVENT Jan 20 01:57:47.967523 kernel: audit: type=1334 audit(1768874267.329:646): prog-id=192 op=LOAD Jan 20 01:57:47.329000 audit: BPF prog-id=192 op=LOAD Jan 20 01:57:47.329000 audit[7757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:48.015849 systemd[1]: Started cri-containerd-fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487.scope - libcontainer container fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487. 
Jan 20 01:57:48.035304 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:57:48.046908 sshd[7795]: Connection closed by 10.0.0.1 port 55380 Jan 20 01:57:48.133065 kernel: audit: type=1300 audit(1768874267.329:646): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:48.248069 kernel: audit: type=1327 audit(1768874267.329:646): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.337000 audit: BPF prog-id=193 op=LOAD Jan 20 01:57:47.337000 audit[7757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.338000 audit: BPF prog-id=193 op=UNLOAD Jan 20 01:57:47.338000 audit[7757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.430000 audit: BPF prog-id=192 op=UNLOAD Jan 20 01:57:47.430000 audit[7757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:47.655000 audit: BPF prog-id=194 op=LOAD Jan 20 01:57:47.655000 audit[7757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=7708 pid=7757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:47.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138316233613936633361613564613833383338313864366237306463 Jan 20 01:57:48.226000 audit[7741]: USER_END pid=7741 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:48.228000 audit[7741]: CRED_DISP pid=7741 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:48.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.44:22-10.0.0.1:55380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:48.061911 sshd-session[7741]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:48.336573 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:55380.service: Deactivated successfully. Jan 20 01:57:48.406259 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:57:48.406874 systemd[1]: session-11.scope: Consumed 1.613s CPU time, 16.4M memory peak. Jan 20 01:57:48.424433 systemd-logind[1623]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:57:48.440847 systemd-logind[1623]: Removed session 11. 
Jan 20 01:57:48.598456 containerd[1643]: time="2026-01-20T01:57:48.516185584Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:57:48.598456 containerd[1643]: time="2026-01-20T01:57:48.516328055Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 01:57:48.598456 containerd[1643]: time="2026-01-20T01:57:48.516345168Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Jan 20 01:57:49.072521 containerd[1643]: time="2026-01-20T01:57:49.054440503Z" level=info msg="connecting to shim 93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" address="unix:///run/containerd/s/c932b87926a7c692e2f375ef791bc1af96d7cca5782350bc664334061648b0de" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:49.548435 systemd-networkd[1534]: cali37e92066da7: Link UP Jan 20 01:57:49.551302 systemd-networkd[1534]: cali37e92066da7: Gained carrier Jan 20 01:57:49.782357 containerd[1643]: time="2026-01-20T01:57:49.704780459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455dcb75d-z7fp4,Uid:73120413-751d-4a6a-a82b-54ccc2e8bc99,Namespace:calico-system,Attempt:0,} returns sandbox id \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\"" Jan 20 01:57:49.835498 containerd[1643]: time="2026-01-20T01:57:49.835452180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:57:50.248000 audit: BPF prog-id=195 op=LOAD Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:24.486 [INFO][7435] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:25.121 [INFO][7435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x6f5h-eth0 csi-node-driver- calico-system eeb09d5e-8a63-4fca-910b-ea49fa1ecf05 1783 0 2026-01-20 01:52:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x6f5h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali37e92066da7 [] [] }} ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:25.121 [INFO][7435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:26.428 [INFO][7494] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" HandleID="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:26.429 [INFO][7494] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" HandleID="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" 
Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x6f5h", "timestamp":"2026-01-20 01:57:26.428909355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:26.429 [INFO][7494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:31.325 [INFO][7494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:31.325 [INFO][7494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:32.299 [INFO][7494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:32.546 [INFO][7494] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:45.561 [INFO][7494] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:45.665 [INFO][7494] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:48.506 [INFO][7494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:48.507 [INFO][7494] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:48.552 [INFO][7494] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2 Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:48.867 [INFO][7494] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:49.000 [INFO][7494] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:49.000 [INFO][7494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" host="localhost" Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:49.001 [INFO][7494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:57:50.263484 containerd[1643]: 2026-01-20 01:57:49.001 [INFO][7494] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" HandleID="k8s-pod-network.9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Workload="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:49.266 [INFO][7435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x6f5h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05", ResourceVersion:"1783", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x6f5h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37e92066da7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:49.266 [INFO][7435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:49.266 [INFO][7435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37e92066da7 ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:49.538 [INFO][7435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:49.725 [INFO][7435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x6f5h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeb09d5e-8a63-4fca-910b-ea49fa1ecf05", ResourceVersion:"1783", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2", Pod:"csi-node-driver-x6f5h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37e92066da7", MAC:"fa:ab:20:2f:a2:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:50.266065 containerd[1643]: 2026-01-20 01:57:50.154 [INFO][7435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" Namespace="calico-system" Pod="csi-node-driver-x6f5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--x6f5h-eth0" Jan 20 01:57:50.334279 containerd[1643]: time="2026-01-20T01:57:50.333975462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:57:50.360013 containerd[1643]: time="2026-01-20T01:57:50.359936867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:57:50.360365 containerd[1643]: time="2026-01-20T01:57:50.360256957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:57:50.367932 kubelet[3123]: E0120 01:57:50.363991 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:57:50.367932 kubelet[3123]: E0120 01:57:50.364077 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:57:50.368597 kubelet[3123]: E0120 01:57:50.367839 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:57:50.396195 containerd[1643]: time="2026-01-20T01:57:50.396145758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:57:50.416000 audit: BPF prog-id=196 op=LOAD Jan 20 01:57:50.416000 audit[7793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.416000 audit: BPF prog-id=196 op=UNLOAD Jan 20 01:57:50.416000 audit[7793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.420000 audit: BPF prog-id=197 op=LOAD Jan 20 01:57:50.420000 audit[7793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.425000 audit: BPF prog-id=198 op=LOAD Jan 20 01:57:50.425000 audit[7793]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.425000 audit: BPF prog-id=198 op=UNLOAD Jan 20 01:57:50.425000 audit[7793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.446000 audit: BPF prog-id=197 op=UNLOAD Jan 20 01:57:50.446000 audit[7793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.465000 audit: BPF prog-id=199 op=LOAD Jan 20 01:57:50.465000 audit[7793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=7749 pid=7793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:50.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662383236326535366435386265656366306364666331353932626163 Jan 20 01:57:50.521923 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:57:50.716919 systemd-networkd[1534]: cali37e92066da7: Gained IPv6LL Jan 20 01:57:50.751643 containerd[1643]: time="2026-01-20T01:57:50.748190754Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74b798b596-r7ptx,Uid:03f653dd-0210-41e9-9d70-a3905826baa1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790\"" Jan 20 01:57:50.897283 systemd[1]: Started cri-containerd-93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff.scope - libcontainer container 93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff. Jan 20 01:57:51.120143 containerd[1643]: time="2026-01-20T01:57:51.119845611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:57:51.586472 containerd[1643]: time="2026-01-20T01:57:51.586324025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:57:51.599092 containerd[1643]: time="2026-01-20T01:57:51.589592213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:57:51.599990 kubelet[3123]: E0120 01:57:51.599941 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:57:51.603118 kubelet[3123]: E0120 01:57:51.603073 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:57:51.605459 kubelet[3123]: E0120 01:57:51.605328 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6455dcb75d-z7fp4_calico-system(73120413-751d-4a6a-a82b-54ccc2e8bc99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:57:51.606238 containerd[1643]: time="2026-01-20T01:57:51.606201525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:57:51.640800 kubelet[3123]: E0120 01:57:51.632957 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455dcb75d-z7fp4" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" Jan 20 01:57:51.924990 systemd-networkd[1534]: calibff2feb3b7c: Link UP Jan 20 01:57:51.928279 systemd-networkd[1534]: calibff2feb3b7c: Gained carrier Jan 20 01:57:52.231330 containerd[1643]: time="2026-01-20T01:57:52.230980027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:57:52.301800 containerd[1643]: time="2026-01-20T01:57:52.293843910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:57:52.301800 containerd[1643]: time="2026-01-20T01:57:52.294033893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:57:52.302078 kubelet[3123]: E0120 01:57:52.299440 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:57:52.302078 kubelet[3123]: E0120 01:57:52.299518 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:57:52.302078 kubelet[3123]: E0120 01:57:52.299809 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:57:52.343280 
kubelet[3123]: E0120 01:57:52.339097 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:23.445 [INFO][7409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:24.161 [INFO][7409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hknvv-eth0 coredns-674b8bbfcf- kube-system 05750e4a-a6e9-4631-9a1f-786fc076da7e 1784 0 2026-01-20 01:49:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hknvv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibff2feb3b7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:24.194 [INFO][7409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:26.641 [INFO][7478] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" HandleID="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:26.641 [INFO][7478] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" HandleID="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013a610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hknvv", "timestamp":"2026-01-20 01:57:26.641305062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:26.642 [INFO][7478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:49.001 [INFO][7478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:49.002 [INFO][7478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:49.284 [INFO][7478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:49.668 [INFO][7478] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.245 [INFO][7478] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.418 [INFO][7478] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.532 [INFO][7478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.532 [INFO][7478] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.737 [INFO][7478] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:50.931 [INFO][7478] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:51.112 [INFO][7478] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:51.112 [INFO][7478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" host="localhost" Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:51.112 [INFO][7478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:57:52.631776 containerd[1643]: 2026-01-20 01:57:51.112 [INFO][7478] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" HandleID="k8s-pod-network.42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Workload="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.633658 containerd[1643]: 2026-01-20 01:57:51.179 [INFO][7409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hknvv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"05750e4a-a6e9-4631-9a1f-786fc076da7e", ResourceVersion:"1784", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hknvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibff2feb3b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:52.633658 containerd[1643]: 2026-01-20 01:57:51.186 [INFO][7409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.633658 containerd[1643]: 2026-01-20 01:57:51.190 [INFO][7409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibff2feb3b7c ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.633658 containerd[1643]: 2026-01-20 01:57:52.061 [INFO][7409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.633658 
containerd[1643]: 2026-01-20 01:57:52.083 [INFO][7409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hknvv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"05750e4a-a6e9-4631-9a1f-786fc076da7e", ResourceVersion:"1784", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f", Pod:"coredns-674b8bbfcf-hknvv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibff2feb3b7c", MAC:"82:2a:95:ae:01:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:52.633658 containerd[1643]: 2026-01-20 01:57:52.489 [INFO][7409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hknvv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hknvv-eth0" Jan 20 01:57:52.643000 audit: BPF prog-id=200 op=LOAD Jan 20 01:57:52.664834 kernel: kauditd_printk_skb: 37 callbacks suppressed Jan 20 01:57:52.664976 kernel: audit: type=1334 audit(1768874272.643:662): prog-id=200 op=LOAD Jan 20 01:57:52.717807 kernel: audit: type=1334 audit(1768874272.691:663): prog-id=201 op=LOAD Jan 20 01:57:52.691000 audit: BPF prog-id=201 op=LOAD Jan 20 01:57:52.718025 kubelet[3123]: E0120 01:57:52.710843 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:57:52.691000 audit[7906]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000112238 a2=98 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.767869 containerd[1643]: time="2026-01-20T01:57:52.758398036Z" level=info msg="StopPodSandbox for \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\"" Jan 20 01:57:52.847901 kernel: audit: type=1300 audit(1768874272.691:663): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112238 a2=98 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:53.014884 kernel: audit: type=1327 audit(1768874272.691:663): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:53.015032 kernel: audit: type=1334 audit(1768874272.719:664): prog-id=201 op=UNLOAD Jan 20 01:57:52.719000 audit: BPF prog-id=201 op=UNLOAD Jan 20 01:57:53.152040 kernel: audit: type=1300 audit(1768874272.719:664): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:53.225303 kernel: audit: type=1327 audit(1768874272.719:664): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:53.225531 kernel: audit: type=1334 audit(1768874272.719:665): prog-id=202 op=LOAD Jan 20 01:57:53.225561 kernel: audit: type=1300 audit(1768874272.719:665): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.719000 audit[7906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.719000 audit: BPF prog-id=202 op=LOAD Jan 20 01:57:52.719000 audit[7906]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=7865 pid=7906 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:53.309984 containerd[1643]: time="2026-01-20T01:57:53.028596464Z" level=info msg="connecting to shim 9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" address="unix:///run/containerd/s/00d83a751bccc9d2746e3b3544d0c1764c723cc3bbd042b2996fc981ea6c297d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:53.374868 kernel: audit: type=1327 audit(1768874272.719:665): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.719000 audit: BPF prog-id=203 op=LOAD Jan 20 01:57:52.719000 audit[7906]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.736000 audit: BPF prog-id=203 op=UNLOAD Jan 20 01:57:52.736000 audit[7906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.736000 audit: BPF prog-id=202 op=UNLOAD Jan 20 01:57:52.736000 audit[7906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:52.736000 audit: BPF prog-id=204 op=LOAD Jan 20 01:57:52.736000 audit[7906]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001126e8 a2=98 a3=0 items=0 ppid=7865 pid=7906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:57:52.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323337663062643862343761613735666537363863333731626430 Jan 20 01:57:53.032008 systemd-networkd[1534]: calibff2feb3b7c: Gained IPv6LL Jan 20 01:57:53.466272 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:57:53.552430 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:54432.service - OpenSSH per-connection server daemon (10.0.0.1:54432). Jan 20 01:57:53.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.44:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:53.656603 containerd[1643]: time="2026-01-20T01:57:53.642523487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b798b596-wbvft,Uid:589f656f-1e0a-4667-bc0d-42908aab3340,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487\"" Jan 20 01:57:53.794348 containerd[1643]: time="2026-01-20T01:57:53.783896484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:57:53.896146 containerd[1643]: time="2026-01-20T01:57:53.861412730Z" level=error msg="get state for 93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff" error="context deadline exceeded" Jan 20 01:57:53.896146 containerd[1643]: time="2026-01-20T01:57:53.865860057Z" level=warning msg="unknown status" status=0 Jan 20 01:57:54.649001 containerd[1643]: time="2026-01-20T01:57:54.622799169Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:57:54.757904 containerd[1643]: time="2026-01-20T01:57:54.757005919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:57:54.856000 audit[7988]: USER_ACCT pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:54.892000 audit[7988]: CRED_ACQ pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:54.892000 audit[7988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc74fe370 a2=3 a3=0 items=0 ppid=1 pid=7988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:54.892000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:57:54.905923 sshd-session[7988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:54.938509 sshd[7988]: Accepted publickey for core from 10.0.0.1 port 54432 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:57:55.099071 kubelet[3123]: E0120 01:57:55.058982 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.286s" Jan 20 
01:57:55.212938 systemd-logind[1623]: New session 12 of user core. Jan 20 01:57:55.235010 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:57:55.272017 containerd[1643]: time="2026-01-20T01:57:55.249454289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:57:55.272017 containerd[1643]: time="2026-01-20T01:57:55.267525975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:57:55.272249 kubelet[3123]: E0120 01:57:55.270158 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:57:55.272249 kubelet[3123]: E0120 01:57:55.270228 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:57:55.272249 kubelet[3123]: E0120 01:57:55.270661 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:57:55.370527 kubelet[3123]: E0120 01:57:55.286246 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:57:55.424000 audit[7988]: USER_START pid=7988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:55.698000 audit[8022]: CRED_ACQ pid=8022 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:55.859084 systemd[1]: Started cri-containerd-9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2.scope - libcontainer container 9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2. Jan 20 01:57:56.006000 audit[8017]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=8017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:57:56.006000 audit[8017]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd84cfa3f0 a2=0 a3=7ffd84cfa3dc items=0 ppid=3237 pid=8017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:56.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:57:56.052000 audit[8017]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=8017 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:57:56.052000 audit[8017]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd84cfa3f0 a2=0 a3=0 items=0 ppid=3237 pid=8017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:56.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:57:56.086540 kubelet[3123]: E0120 01:57:56.043301 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:57:56.881860 systemd[1]: cri-containerd-034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a.scope: Deactivated successfully. Jan 20 01:57:56.920000 audit: BPF prog-id=185 op=UNLOAD Jan 20 01:57:56.920000 audit: BPF prog-id=189 op=UNLOAD Jan 20 01:57:56.998173 systemd-networkd[1534]: cali3b1aeccb903: Link UP Jan 20 01:57:57.080135 systemd-networkd[1534]: cali3b1aeccb903: Gained carrier Jan 20 01:57:57.328230 containerd[1643]: time="2026-01-20T01:57:57.327175872Z" level=info msg="received sandbox exit event container_id:\"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" id:\"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" exit_status:137 exited_at:{seconds:1768874277 nanos:287006394}" monitor_name=podsandbox Jan 20 01:57:57.373156 containerd[1643]: time="2026-01-20T01:57:57.367547696Z" level=info msg="connecting to shim 42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f" address="unix:///run/containerd/s/b43870877667e08c180075b50c9a24aeb255e7678a3d9d4745fc9e25faf1929e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:57:57.441000 audit[8070]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=8070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:57:57.441000 audit[8070]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe3a79f2e0 a2=0 a3=7ffe3a79f2cc items=0 ppid=3237 pid=8070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:57.441000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:57:57.483000 audit[8070]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=8070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:57:57.483000 audit[8070]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe3a79f2e0 a2=0 a3=0 items=0 ppid=3237 pid=8070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:57.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:27.866 [INFO][7579] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:28.706 [INFO][7579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0 calico-kube-controllers-86d7bc7b4f- calico-system 68cbc571-4445-4166-912c-8fdfe252aae2 1775 0 2026-01-20 01:52:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d7bc7b4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86d7bc7b4f-k5t2j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3b1aeccb903 [] [] }} 
ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:28.730 [INFO][7579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:45.703 [INFO][7695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" HandleID="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:45.703 [INFO][7695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" HandleID="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f52c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86d7bc7b4f-k5t2j", "timestamp":"2026-01-20 01:57:45.703313066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:45.704 [INFO][7695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:51.117 [INFO][7695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:51.117 [INFO][7695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:51.851 [INFO][7695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:52.598 [INFO][7695] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:52.792 [INFO][7695] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:53.416 [INFO][7695] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:53.654 [INFO][7695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:53.654 [INFO][7695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:53.853 [INFO][7695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342 Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:54.211 [INFO][7695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:54.920 [INFO][7695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:54.953 [INFO][7695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" host="localhost" Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:55.036 [INFO][7695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:57:57.688931 containerd[1643]: 2026-01-20 01:57:55.039 [INFO][7695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" HandleID="k8s-pod-network.b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Workload="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:56.112 [INFO][7579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0", GenerateName:"calico-kube-controllers-86d7bc7b4f-", Namespace:"calico-system", SelfLink:"", UID:"68cbc571-4445-4166-912c-8fdfe252aae2", ResourceVersion:"1775", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d7bc7b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86d7bc7b4f-k5t2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b1aeccb903", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:56.126 [INFO][7579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:56.127 [INFO][7579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b1aeccb903 ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:57.436 [INFO][7579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:57.449 [INFO][7579] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0", GenerateName:"calico-kube-controllers-86d7bc7b4f-", Namespace:"calico-system", SelfLink:"", UID:"68cbc571-4445-4166-912c-8fdfe252aae2", ResourceVersion:"1775", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 52, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d7bc7b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342", Pod:"calico-kube-controllers-86d7bc7b4f-k5t2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3b1aeccb903", MAC:"9e:13:58:f0:5e:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:57:57.690299 containerd[1643]: 2026-01-20 01:57:57.547 [INFO][7579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" Namespace="calico-system" Pod="calico-kube-controllers-86d7bc7b4f-k5t2j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d7bc7b4f--k5t2j-eth0" Jan 20 01:57:57.887611 containerd[1643]: time="2026-01-20T01:57:57.887550361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ft2sl,Uid:dce8f61b-70a0-47ff-b7a3-9a49a15c7261,Namespace:kube-system,Attempt:0,} returns sandbox id \"93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff\"" Jan 20 01:57:57.961278 kubelet[3123]: E0120 01:57:57.949158 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:58.065963 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 01:57:58.066142 kernel: audit: type=1334 audit(1768874277.977:682): prog-id=205 op=LOAD Jan 20 01:57:57.977000 audit: BPF prog-id=205 op=LOAD Jan 20 01:57:58.066271 sshd[8022]: Connection closed by 10.0.0.1 port 54432 Jan 20 01:57:57.990326 sshd-session[7988]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:58.197448 kernel: audit: type=1106 audit(1768874278.002:683): pid=7988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:58.002000 audit[7988]: USER_END pid=7988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:58.052383 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:54432.service: Deactivated successfully. Jan 20 01:57:58.101506 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:57:58.183455 systemd-logind[1623]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:57:58.006000 audit[7988]: CRED_DISP pid=7988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:58.237450 systemd-logind[1623]: Removed session 12. Jan 20 01:57:58.430588 kernel: audit: type=1104 audit(1768874278.006:684): pid=7988 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:57:58.430925 kernel: audit: type=1131 audit(1768874278.052:685): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.44:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:57:58.430994 kernel: audit: type=1334 audit(1768874278.181:686): prog-id=206 op=LOAD Jan 20 01:57:58.431080 kernel: audit: type=1300 audit(1768874278.181:686): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.44:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:57:58.181000 audit: BPF prog-id=206 op=LOAD Jan 20 01:57:58.181000 audit[7995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.431478 containerd[1643]: time="2026-01-20T01:57:58.375637527Z" level=info msg="CreateContainer within sandbox \"93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:57:58.656262 kernel: audit: type=1327 audit(1768874278.181:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.656466 kernel: audit: type=1334 audit(1768874278.181:687): prog-id=206 op=UNLOAD Jan 20 01:57:58.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.181000 audit: BPF prog-id=206 op=UNLOAD Jan 20 01:57:58.181000 audit[7995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.661552 systemd-networkd[1534]: cali3b1aeccb903: Gained IPv6LL Jan 20 01:57:58.674543 containerd[1643]: time="2026-01-20T01:57:58.661284648Z" level=error msg="get state for 9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" error="context deadline exceeded" Jan 20 01:57:58.674543 containerd[1643]: time="2026-01-20T01:57:58.661326477Z" level=warning msg="unknown status" status=0 Jan 20 01:57:58.824835 kernel: audit: type=1300 audit(1768874278.181:687): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.824970 kernel: audit: type=1327 audit(1768874278.181:687): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.197000 audit: BPF prog-id=207 op=LOAD Jan 20 01:57:58.197000 audit[7995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:57:58.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.459000 audit: BPF prog-id=208 op=LOAD Jan 20 01:57:58.459000 audit[7995]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.470000 audit: BPF prog-id=208 op=UNLOAD Jan 20 01:57:58.470000 audit[7995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.470000 audit: BPF prog-id=207 op=UNLOAD Jan 20 01:57:58.470000 audit[7995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.470000 audit: BPF prog-id=209 op=LOAD Jan 20 01:57:58.470000 audit[7995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=7977 pid=7995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:58.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323461396636653031343265376436333662356534646262643565 Jan 20 01:57:58.744623 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:57:58.851468 systemd[1]: Started cri-containerd-42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f.scope - libcontainer container 42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f. 
Jan 20 01:58:00.776000 audit[8124]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=8124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:00.776000 audit[8124]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc74bb3d70 a2=0 a3=7ffc74bb3d5c items=0 ppid=3237 pid=8124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:00.806473 kubelet[3123]: E0120 01:58:00.761169 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:00.882750 containerd[1643]: time="2026-01-20T01:58:00.866550626Z" level=error msg="get state for 9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2" error="context deadline exceeded" Jan 20 01:58:00.882750 containerd[1643]: time="2026-01-20T01:58:00.867060118Z" level=warning msg="unknown status" status=0 Jan 20 01:58:00.946567 systemd-networkd[1534]: cali0370bb08c39: Link UP Jan 20 01:58:00.965000 audit: BPF prog-id=210 op=LOAD Jan 20 01:58:00.957000 audit[8124]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=8124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:00.957000 audit[8124]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc74bb3d70 a2=0 a3=0 items=0 ppid=3237 pid=8124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:00.965000 audit[8128]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7d9895a0 a2=98 a3=1fffffffffffffff items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.965000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.971000 audit: BPF prog-id=210 op=UNLOAD Jan 20 01:58:00.971000 audit[8128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe7d989570 a3=0 items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.971000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.986000 audit: BPF prog-id=211 op=LOAD Jan 20 01:58:00.986000 audit[8128]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7ffe7d989480 a2=94 a3=3 items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.986000 audit: BPF prog-id=211 op=UNLOAD Jan 20 01:58:00.986000 audit[8128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7d989480 a2=94 a3=3 items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.986000 audit: BPF prog-id=212 op=LOAD Jan 20 01:58:00.986000 audit[8128]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7d9894c0 a2=94 a3=7ffe7d9896a0 items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.988000 audit: BPF prog-id=212 op=UNLOAD Jan 20 01:58:00.988000 audit[8128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7d9894c0 a2=94 a3=7ffe7d9896a0 items=0 ppid=7538 pid=8128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:00.988000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:00.990636 systemd-networkd[1534]: cali0370bb08c39: Gained carrier Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:29.887 [INFO][7654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:30.387 [INFO][7654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--szpvj-eth0 goldmane-666569f655- calico-system 285811f9-e547-431f-a7b0-90e1226d2f4d 1344 0 2026-01-20 01:51:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-szpvj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0370bb08c39 [] [] }} 
ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:30.387 [INFO][7654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:45.705 [INFO][7760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" HandleID="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Workload="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:45.723 [INFO][7760] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" HandleID="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Workload="localhost-k8s-goldmane--666569f655--szpvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f1240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-szpvj", "timestamp":"2026-01-20 01:57:45.705572377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:46.959 [INFO][7760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:55.026 [INFO][7760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:55.027 [INFO][7760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:55.962 [INFO][7760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.032 [INFO][7760] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.258 [INFO][7760] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.459 [INFO][7760] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.775 [INFO][7760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.898 [INFO][7760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:57.969 [INFO][7760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:58.190 [INFO][7760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:58.620 [INFO][7760] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:58.621 [INFO][7760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" host="localhost" Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:58.621 [INFO][7760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:58:01.146645 containerd[1643]: 2026-01-20 01:57:58.621 [INFO][7760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" HandleID="k8s-pod-network.abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Workload="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:57:58.834 [INFO][7654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--szpvj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"285811f9-e547-431f-a7b0-90e1226d2f4d", ResourceVersion:"1344", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-szpvj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0370bb08c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:58:00.883 [INFO][7654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:58:00.883 [INFO][7654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0370bb08c39 ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:58:01.045 [INFO][7654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:58:01.046 [INFO][7654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--szpvj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"285811f9-e547-431f-a7b0-90e1226d2f4d", ResourceVersion:"1344", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 51, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc", Pod:"goldmane-666569f655-szpvj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0370bb08c39", MAC:"ee:71:24:af:75:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:01.150150 containerd[1643]: 2026-01-20 01:58:01.120 [INFO][7654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" Namespace="calico-system" Pod="goldmane-666569f655-szpvj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--szpvj-eth0" Jan 20 01:58:01.173000 audit: BPF prog-id=213 op=LOAD Jan 20 01:58:01.173000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9380d2b0 a2=98 a3=3 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.173000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.198000 audit: BPF prog-id=213 op=UNLOAD Jan 20 01:58:01.198000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9380d280 a3=0 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.198000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.211000 audit: BPF prog-id=214 op=LOAD Jan 20 01:58:01.211000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9380d0a0 a2=94 a3=54428f items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.211000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.216000 audit: BPF prog-id=214 op=UNLOAD Jan 20 01:58:01.216000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9380d0a0 a2=94 a3=54428f items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.217000 audit: BPF prog-id=215 op=LOAD Jan 20 01:58:01.217000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9380d0d0 a2=94 a3=2 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.217000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.223000 audit: BPF prog-id=215 op=UNLOAD Jan 20 01:58:01.223000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9380d0d0 a2=0 a3=2 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:01.278901 containerd[1643]: time="2026-01-20T01:58:01.278767084Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:58:01.282535 containerd[1643]: time="2026-01-20T01:58:01.282492352Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 01:58:01.397107 containerd[1643]: time="2026-01-20T01:58:01.397041908Z" level=info msg="connecting to shim b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" address="unix:///run/containerd/s/00223b048dc376c71abc4003231a8c76268d899ea20be8188194cf123f2aafda" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:01.543565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248852529.mount: Deactivated successfully. 
Jan 20 01:58:01.615000 audit: BPF prog-id=216 op=LOAD Jan 20 01:58:01.642628 containerd[1643]: time="2026-01-20T01:58:01.640395380Z" level=info msg="Container ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:58:01.762000 audit: BPF prog-id=217 op=LOAD Jan 20 01:58:01.762000 audit[8088]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.762000 audit: BPF prog-id=217 op=UNLOAD Jan 20 01:58:01.762000 audit[8088]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.762000 audit: BPF prog-id=218 op=LOAD Jan 20 01:58:01.762000 audit[8088]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.763000 audit: BPF prog-id=219 op=LOAD Jan 20 01:58:01.763000 audit[8088]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.770000 audit: BPF prog-id=219 op=UNLOAD Jan 20 01:58:01.770000 audit[8088]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.770000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.770000 audit: BPF prog-id=218 op=UNLOAD Jan 20 01:58:01.770000 audit[8088]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.770000 audit: BPF prog-id=220 op=LOAD Jan 20 01:58:01.770000 audit[8088]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=8058 pid=8088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:01.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643831303037656439303765643437396535323132666162636539 Jan 20 01:58:01.811976 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:02.077421 containerd[1643]: time="2026-01-20T01:58:02.077208670Z" level=error msg="ExecSync for \"c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jan 20 01:58:02.113945 systemd-networkd[1534]: cali0370bb08c39: Gained IPv6LL Jan 20 01:58:02.125244 kubelet[3123]: E0120 01:58:02.125172 3123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 20 01:58:02.359078 containerd[1643]: time="2026-01-20T01:58:02.358000456Z" level=info msg="connecting to shim abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc" address="unix:///run/containerd/s/c8d032144b0e66eace9b8948f9a67d949c4bb911f0cd96817dd738224d5d24f1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:02.443459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a-rootfs.mount: Deactivated successfully. 
Jan 20 01:58:02.625156 containerd[1643]: time="2026-01-20T01:58:02.623785910Z" level=info msg="CreateContainer within sandbox \"93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad\"" Jan 20 01:58:02.670034 containerd[1643]: time="2026-01-20T01:58:02.668197004Z" level=info msg="StartContainer for \"ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad\"" Jan 20 01:58:02.693438 systemd[1]: Started cri-containerd-b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342.scope - libcontainer container b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342. Jan 20 01:58:02.827588 containerd[1643]: time="2026-01-20T01:58:02.827313431Z" level=info msg="connecting to shim ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad" address="unix:///run/containerd/s/c932b87926a7c692e2f375ef791bc1af96d7cca5782350bc664334061648b0de" protocol=ttrpc version=3 Jan 20 01:58:02.852621 containerd[1643]: time="2026-01-20T01:58:02.847279883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x6f5h,Uid:eeb09d5e-8a63-4fca-910b-ea49fa1ecf05,Namespace:calico-system,Attempt:0,} returns sandbox id \"9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2\"" Jan 20 01:58:02.859468 containerd[1643]: time="2026-01-20T01:58:02.858861504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:58:03.278178 containerd[1643]: time="2026-01-20T01:58:03.277414271Z" level=info msg="shim disconnected" id=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a namespace=k8s.io Jan 20 01:58:03.278178 containerd[1643]: time="2026-01-20T01:58:03.277460639Z" level=info msg="cleaning up after shim disconnected" id=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a namespace=k8s.io Jan 20 01:58:03.278178 containerd[1643]: time="2026-01-20T01:58:03.277478202Z" level=info msg="cleaning up dead shim" id=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a namespace=k8s.io Jan 20 01:58:03.343380 kernel: kauditd_printk_skb: 79 callbacks suppressed Jan 20 01:58:03.343550 kernel: audit: type=1130 audit(1768874283.313:715): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.44:22-10.0.0.1:60636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:03.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.44:22-10.0.0.1:60636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:03.323051 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:60636.service - OpenSSH per-connection server daemon (10.0.0.1:60636). 
Jan 20 01:58:03.349025 containerd[1643]: time="2026-01-20T01:58:03.348922773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:03.402649 containerd[1643]: time="2026-01-20T01:58:03.396291472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hknvv,Uid:05750e4a-a6e9-4631-9a1f-786fc076da7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f\"" Jan 20 01:58:03.493126 kubelet[3123]: E0120 01:58:03.470152 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:03.627569 containerd[1643]: time="2026-01-20T01:58:03.601162944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:58:03.627569 containerd[1643]: time="2026-01-20T01:58:03.601343068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:03.628483 kubelet[3123]: E0120 01:58:03.601557 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:03.628483 kubelet[3123]: E0120 01:58:03.601624 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:03.628483 kubelet[3123]: E0120 01:58:03.601929 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:03.718803 containerd[1643]: time="2026-01-20T01:58:03.718629273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:58:03.872544 containerd[1643]: time="2026-01-20T01:58:03.850902152Z" level=info msg="CreateContainer within sandbox \"42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:58:03.879146 systemd[1]: Started cri-containerd-abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc.scope - libcontainer container abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc. Jan 20 01:58:03.902785 systemd[1]: Started cri-containerd-ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad.scope - libcontainer container ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad. 
Jan 20 01:58:03.961245 kernel: audit: type=1334 audit(1768874283.918:716): prog-id=221 op=LOAD Jan 20 01:58:03.918000 audit: BPF prog-id=221 op=LOAD Jan 20 01:58:03.965000 audit: BPF prog-id=222 op=LOAD Jan 20 01:58:03.965000 audit[8182]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001dc238 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.160342 kernel: audit: type=1334 audit(1768874283.965:717): prog-id=222 op=LOAD Jan 20 01:58:04.332473 kernel: audit: type=1300 audit(1768874283.965:717): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001dc238 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.334250 kernel: audit: type=1327 audit(1768874283.965:717): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.334342 kernel: audit: type=1334 audit(1768874283.965:718): prog-id=222 op=UNLOAD Jan 20 01:58:04.334810 kernel: audit: type=1300 audit(1768874283.965:718): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.334866 kernel: audit: type=1327 audit(1768874283.965:718): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:03.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:03.965000 audit: BPF prog-id=222 op=UNLOAD Jan 20 01:58:04.485334 kernel: audit: type=1334 audit(1768874284.314:719): prog-id=223 op=LOAD Jan 20 01:58:04.485398 kernel: audit: type=1300 audit(1768874284.314:719): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001dc488 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:03.965000 audit[8182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:03.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.314000 
audit: BPF prog-id=223 op=LOAD Jan 20 01:58:04.314000 audit[8182]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001dc488 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.423000 audit: BPF prog-id=224 op=LOAD Jan 20 01:58:04.423000 audit[8182]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001dc218 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.430000 audit: BPF prog-id=224 op=UNLOAD Jan 20 01:58:04.430000 audit[8182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.430000 audit: BPF prog-id=223 op=UNLOAD Jan 20 01:58:04.430000 audit[8182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.430000 audit: BPF prog-id=225 op=LOAD Jan 20 01:58:04.430000 audit[8182]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001dc6e8 a2=98 a3=0 items=0 ppid=8133 pid=8182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:04.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363238383937636263646661363664636166663765353635653366 Jan 20 01:58:04.581593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023897078.mount: Deactivated successfully. 
Jan 20 01:58:04.619574 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:04.736745 containerd[1643]: time="2026-01-20T01:58:04.732665481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:04.746152 containerd[1643]: time="2026-01-20T01:58:04.732947627Z" level=info msg="Container fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:58:04.803121 containerd[1643]: time="2026-01-20T01:58:04.795590947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:58:04.808212 containerd[1643]: time="2026-01-20T01:58:04.807442049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:04.819364 kubelet[3123]: E0120 01:58:04.815319 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:04.819364 kubelet[3123]: E0120 01:58:04.815436 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:04.819364 kubelet[3123]: E0120 01:58:04.815902 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:05.844841 kubelet[3123]: E0120 01:58:05.815932 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:58:06.022276 containerd[1643]: time="2026-01-20T01:58:06.019451100Z" level=error msg="get state for b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342" error="context deadline exceeded" Jan 20 01:58:06.022276 containerd[1643]: time="2026-01-20T01:58:06.019521243Z" level=warning msg="unknown status" status=0 Jan 20 01:58:06.022276 containerd[1643]: time="2026-01-20T01:58:06.020184910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:58:06.086000 audit: BPF prog-id=226 op=LOAD Jan 20 01:58:06.097000 audit: BPF prog-id=227 op=LOAD Jan 20 01:58:06.097000 audit[8219]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=7865 
pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.099000 audit: BPF prog-id=227 op=UNLOAD Jan 20 01:58:06.099000 audit[8219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.101000 audit: BPF prog-id=228 op=LOAD Jan 20 01:58:06.101000 audit[8219]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.102000 audit: BPF prog-id=229 op=LOAD Jan 20 01:58:06.102000 audit[8219]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.102000 audit: BPF prog-id=229 op=UNLOAD Jan 20 01:58:06.102000 audit[8219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.102000 audit: BPF prog-id=228 op=UNLOAD Jan 20 01:58:06.102000 audit[8219]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.103000 audit: BPF prog-id=230 op=LOAD Jan 20 01:58:06.103000 audit[8219]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=7865 pid=8219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393236636662386565383666363532313437656166363461383936 Jan 20 01:58:06.136563 sshd[8243]: Accepted publickey for core from 10.0.0.1 port 60636 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:06.133000 audit[8243]: USER_ACCT pid=8243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:06.165000 audit[8243]: CRED_ACQ pid=8243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:06.166000 audit[8243]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdafe37e20 a2=3 a3=0 items=0 ppid=1 pid=8243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.166000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:06.193202 sshd-session[8243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:06.203321 containerd[1643]: time="2026-01-20T01:58:06.201512356Z" level=info msg="received sandbox container exit event sandbox_id:\"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" exit_status:137 exited_at:{seconds:1768874277 nanos:287006394}" monitor_name=criService Jan 20 01:58:06.241511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a-shm.mount: Deactivated successfully. Jan 20 01:58:06.339907 containerd[1643]: time="2026-01-20T01:58:06.205574188Z" level=info msg="CreateContainer within sandbox \"42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91\"" Jan 20 01:58:06.418409 systemd-logind[1623]: New session 13 of user core. Jan 20 01:58:06.421445 systemd[1]: Started session-13.scope - Session 13 of User core. 
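The audit: PROCTITLE records above (and throughout this log) carry the triggering command line hex-encoded, because the raw value embeds NUL separators between arguments. Below is a minimal Go sketch for turning those fields back into readable argv; the sample value is a prefix of the runc proctitle from the records above, so the decoding can be checked against the log itself.

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    // decodeProctitle turns the hex-encoded proctitle field of an audit
    // PROCTITLE record back into argv. The kernel hex-encodes the value
    // because the raw command line uses NUL bytes between arguments.
    func decodeProctitle(h string) ([]string, error) {
    	raw, err := hex.DecodeString(h)
    	if err != nil {
    		return nil, err
    	}
    	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
    }

    func main() {
    	// Prefix of the runc proctitle value from the audit records above.
    	args, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
    }

Applied to the full values above, the runc records decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ac926cfb8ee86f652147eaf64a896…" (a truncated prefix of the container ID that later starts successfully at 01:58:07), and the sshd record decodes to "sshd-session: core [priv]".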
Jan 20 01:58:06.432904 containerd[1643]: time="2026-01-20T01:58:06.423950351Z" level=info msg="StartContainer for \"fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91\"" Jan 20 01:58:06.575000 audit[8243]: USER_START pid=8243 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:06.601000 audit[8301]: CRED_ACQ pid=8301 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:06.739098 kubelet[3123]: E0120 01:58:06.737431 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.827s" Jan 20 01:58:06.755552 containerd[1643]: time="2026-01-20T01:58:06.740415973Z" level=info msg="connecting to shim fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91" address="unix:///run/containerd/s/b43870877667e08c180075b50c9a24aeb255e7678a3d9d4745fc9e25faf1929e" protocol=ttrpc version=3 Jan 20 01:58:06.803000 audit: BPF prog-id=231 op=LOAD Jan 20 01:58:06.821000 audit: BPF prog-id=232 op=LOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001da238 a2=98 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.821000 audit: BPF prog-id=232 op=UNLOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.821000 audit: BPF prog-id=233 op=LOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001da488 a2=98 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.821000 audit: BPF prog-id=234 op=LOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001da218 
a2=98 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.821000 audit: BPF prog-id=234 op=UNLOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.821000 audit: BPF prog-id=233 op=UNLOAD Jan 20 01:58:06.821000 audit[8240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:06.830000 audit: BPF prog-id=235 op=LOAD Jan 20 01:58:06.830000 audit[8240]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001da6e8 a2=98 a3=0 items=0 ppid=8189 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:06.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162626137356266343830343132663134313262353766623136383330 Jan 20 01:58:07.015345 containerd[1643]: time="2026-01-20T01:58:07.011460836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:07.023534 containerd[1643]: time="2026-01-20T01:58:07.023202849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:58:07.023534 containerd[1643]: time="2026-01-20T01:58:07.023494915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:07.160640 kubelet[3123]: E0120 01:58:07.150625 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:07.160640 kubelet[3123]: E0120 01:58:07.150928 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:07.160640 kubelet[3123]: E0120 01:58:07.151493 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:07.160640 kubelet[3123]: E0120 01:58:07.155376 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:58:07.217029 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:07.590470 
kubelet[3123]: I0120 01:58:07.588071 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:07.630393 kubelet[3123]: E0120 01:58:07.627322 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:58:07.746051 containerd[1643]: time="2026-01-20T01:58:07.744648537Z" level=info msg="StartContainer for \"ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad\" returns successfully" Jan 20 01:58:07.832858 containerd[1643]: time="2026-01-20T01:58:07.832332595Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:58:07.962878 systemd[1]: Started cri-containerd-fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91.scope - libcontainer container fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91. Jan 20 01:58:08.413546 sshd[8301]: Connection closed by 10.0.0.1 port 60636 Jan 20 01:58:08.413845 sshd-session[8243]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:08.539542 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 20 01:58:08.540312 kernel: audit: type=1106 audit(1768874288.426:745): pid=8243 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:08.426000 audit[8243]: USER_END pid=8243 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:08.443037 systemd-logind[1623]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:58:08.453485 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:60636.service: Deactivated successfully. Jan 20 01:58:08.488081 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:58:08.528061 systemd-logind[1623]: Removed session 13. 
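Every Calico image pull in this log fails the same way: the fetch of ghcr.io/flatcar/calico/*:v3.30.4 comes back 404 Not Found from ghcr.io, containerd reports NotFound, and kubelet escalates to ErrImagePull and then ImagePullBackOff. The following is a minimal sketch of the same tag-resolution step against the registry's OCI distribution API; the ghcr.io anonymous pull-token endpoint and the Accept media-type list are assumptions about the registry, not taken from this log.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // tagExists reproduces the resolution step that returns 404 in the log:
    // ask the registry's OCI distribution API whether <repo>:<tag> has a
    // manifest. Assumes ghcr.io's anonymous pull-token endpoint.
    func tagExists(repo, tag string) (bool, error) {
    	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		return false, err
    	}

    	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
    	if err != nil {
    		return false, err
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	req.Header.Set("Accept",
    		"application/vnd.oci.image.index.v1+json, "+
    			"application/vnd.docker.distribution.manifest.list.v2+json, "+
    			"application/vnd.docker.distribution.manifest.v2+json")
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return false, err
    	}
    	res.Body.Close()
    	return res.StatusCode == http.StatusOK, nil
    }

    func main() {
    	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("tag resolves:", ok) // false while the registry returns 404
    }

The same check can likely be done from the node with existing tooling, e.g. "crane manifest ghcr.io/flatcar/calico/apiserver:v3.30.4" or "ctr images pull …", either of which should surface the same 404.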
Jan 20 01:58:08.426000 audit[8243]: CRED_DISP pid=8243 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:08.651666 kernel: audit: type=1104 audit(1768874288.426:746): pid=8243 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:08.651959 kernel: audit: type=1131 audit(1768874288.454:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.44:22-10.0.0.1:60636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:08.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.44:22-10.0.0.1:60636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:08.701831 containerd[1643]: time="2026-01-20T01:58:08.698061693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d7bc7b4f-k5t2j,Uid:68cbc571-4445-4166-912c-8fdfe252aae2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342\"" Jan 20 01:58:08.734470 containerd[1643]: time="2026-01-20T01:58:08.734366748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:58:08.753807 kubelet[3123]: E0120 01:58:08.753413 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:08.986155 kernel: audit: type=1334 audit(1768874288.920:748): prog-id=236 op=LOAD Jan 20 01:58:08.989626 kernel: audit: type=1334 audit(1768874288.939:749): prog-id=237 op=LOAD Jan 20 01:58:08.920000 audit: BPF prog-id=236 op=LOAD Jan 20 01:58:08.939000 audit: BPF prog-id=237 op=LOAD Jan 20 01:58:08.990015 kubelet[3123]: I0120 01:58:08.979550 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ft2sl" podStartSLOduration=502.979420381 podStartE2EDuration="8m22.979420381s" podCreationTimestamp="2026-01-20 01:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:58:08.959048644 +0000 UTC m=+503.407497359" watchObservedRunningTime="2026-01-20 01:58:08.979420381 +0000 UTC m=+503.427869116" Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.095036 kernel: audit: type=1300 audit(1768874288.939:749): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:09.176214 containerd[1643]: time="2026-01-20T01:58:09.164579445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:09.224394 containerd[1643]: time="2026-01-20T01:58:09.220504453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:58:09.225128 containerd[1643]: time="2026-01-20T01:58:09.220649751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:09.262002 kernel: audit: type=1327 audit(1768874288.939:749): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:09.325873 kernel: audit: type=1334 audit(1768874288.939:750): prog-id=237 op=UNLOAD Jan 20 01:58:08.939000 audit: BPF prog-id=237 op=UNLOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:09.454182 kubelet[3123]: E0120 01:58:09.306411 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:09.454182 kubelet[3123]: E0120 01:58:09.400408 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:09.525025 kernel: audit: type=1300 audit(1768874288.939:750): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.525365 kernel: audit: type=1327 audit(1768874288.939:750): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 
01:58:08.939000 audit: BPF prog-id=238 op=LOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168488 a2=98 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:08.939000 audit: BPF prog-id=239 op=LOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000168218 a2=98 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:08.939000 audit: BPF prog-id=239 op=UNLOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:08.939000 audit: BPF prog-id=238 op=UNLOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:08.939000 audit: BPF prog-id=240 op=LOAD Jan 20 01:58:08.939000 audit[8324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001686e8 a2=98 a3=0 items=0 ppid=8058 pid=8324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:08.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661353032623363336236326338613063626130363336666636363438 Jan 20 01:58:09.543586 kubelet[3123]: E0120 01:58:09.534611 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:09.544489 kubelet[3123]: E0120 01:58:09.544332 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:09.545000 audit: BPF prog-id=241 op=LOAD Jan 20 01:58:09.545000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9380cf90 a2=94 a3=1 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.545000 audit: BPF prog-id=241 op=UNLOAD Jan 20 01:58:09.545000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9380cf90 a2=94 a3=1 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.605214 containerd[1643]: time="2026-01-20T01:58:09.597803505Z" level=info msg="container event discarded" container=5f3e905fa78b0f7594c614914c72be4ee64f38b1c5e2cd99718fbf94428e1c43 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:09.605214 containerd[1643]: time="2026-01-20T01:58:09.597892996Z" level=info msg="container event discarded" container=63a141b94f3536e20f0d4e8a1fa3bc37fd34cb9ecb2cbfaedbb664f4f2d7d907 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:09.856593 systemd-networkd[1534]: calice5c1d73743: Link DOWN Jan 20 01:58:09.856623 systemd-networkd[1534]: calice5c1d73743: Lost carrier Jan 20 01:58:09.907000 audit: BPF prog-id=242 op=LOAD Jan 20 01:58:09.907000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9380cf80 a2=94 a3=4 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.907000 audit: BPF prog-id=242 op=UNLOAD Jan 20 01:58:09.907000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9380cf80 a2=0 a3=4 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.947439 containerd[1643]: time="2026-01-20T01:58:09.946439208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-szpvj,Uid:285811f9-e547-431f-a7b0-90e1226d2f4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc\"" Jan 20 01:58:09.907000 audit: BPF prog-id=243 op=LOAD Jan 20 01:58:09.907000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9380cde0 a2=94 a3=5 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.907000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.946000 audit: BPF prog-id=243 op=UNLOAD Jan 20 01:58:09.946000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9380cde0 a2=0 a3=5 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.946000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.946000 audit: BPF prog-id=244 op=LOAD Jan 20 01:58:09.946000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9380d000 a2=94 a3=6 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:09.946000 audit: BPF prog-id=244 op=UNLOAD Jan 20 01:58:09.946000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9380d000 a2=0 a3=6 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:09.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:10.042000 audit: BPF prog-id=245 op=LOAD Jan 20 01:58:10.042000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9380c7b0 a2=94 a3=88 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.042000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:10.043000 audit: BPF prog-id=246 op=LOAD Jan 20 01:58:10.043000 audit[8141]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff9380c630 a2=94 a3=2 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.043000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:10.043000 audit: BPF prog-id=246 op=UNLOAD Jan 20 01:58:10.043000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff9380c660 a2=0 a3=7fff9380c760 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.043000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:10.056000 audit: BPF prog-id=245 op=UNLOAD Jan 20 01:58:10.056000 audit[8141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1ba8fd10 a2=0 a3=fe9966e6e9aa3ac7 items=0 ppid=7538 pid=8141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.056000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:10.082464 containerd[1643]: time="2026-01-20T01:58:10.014424305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:58:10.082464 containerd[1643]: time="2026-01-20T01:58:10.027039564Z" level=info msg="container event discarded" container=96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2 type=CONTAINER_CREATED_EVENT Jan 20 01:58:10.090961 kubelet[3123]: E0120 01:58:10.088560 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:10.093000 audit[8381]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=8381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:10.093000 audit[8381]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc00edeb30 a2=0 a3=7ffc00edeb1c items=0 ppid=3237 pid=8381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:10.140000 audit[8381]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=8381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:10.140000 audit[8381]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc00edeb30 a2=0 a3=0 items=0 ppid=3237 pid=8381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:10.181551 kubelet[3123]: E0120 01:58:10.175003 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:10.593065 containerd[1643]: time="2026-01-20T01:58:10.563187255Z" level=info msg="container event discarded" container=3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531 type=CONTAINER_CREATED_EVENT Jan 20 01:58:10.815000 audit: BPF prog-id=247 op=LOAD Jan 20 01:58:10.815000 audit[8398]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe733a0220 a2=98 a3=1999999999999999 items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.816000 audit: BPF prog-id=247 op=UNLOAD Jan 20 01:58:10.816000 audit[8398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe733a01f0 a3=0 items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.816000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.817000 audit: BPF prog-id=248 op=LOAD Jan 20 01:58:10.817000 audit[8398]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe733a0100 a2=94 a3=ffff items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.817000 audit: BPF prog-id=248 op=UNLOAD Jan 20 01:58:10.817000 audit[8398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe733a0100 a2=94 a3=ffff items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.818000 audit: BPF prog-id=249 op=LOAD Jan 20 01:58:10.818000 audit[8398]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe733a0140 a2=94 a3=7ffe733a0320 items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.831965 containerd[1643]: time="2026-01-20T01:58:10.824802515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:10.818000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.832000 audit: BPF prog-id=249 op=UNLOAD Jan 20 01:58:10.832000 audit[8398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe733a0140 a2=94 a3=7ffe733a0320 items=0 ppid=7538 pid=8398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:10.832000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:58:10.915950 containerd[1643]: time="2026-01-20T01:58:10.905286769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" Jan 20 01:58:10.915950 containerd[1643]: time="2026-01-20T01:58:10.905464589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:10.915950 containerd[1643]: time="2026-01-20T01:58:10.912530396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:58:10.916188 kubelet[3123]: E0120 01:58:10.905777 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:10.916188 kubelet[3123]: E0120 01:58:10.905841 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:10.916188 kubelet[3123]: E0120 01:58:10.906134 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 
01:58:10.916188 kubelet[3123]: E0120 01:58:10.911451 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:58:11.159894 containerd[1643]: time="2026-01-20T01:58:11.159597010Z" level=info msg="StartContainer for \"fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91\" returns successfully" Jan 20 01:58:11.247162 kubelet[3123]: E0120 01:58:11.226142 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:11.432957 containerd[1643]: time="2026-01-20T01:58:11.432752304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:11.454342 containerd[1643]: time="2026-01-20T01:58:11.454226178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:58:11.454572 containerd[1643]: time="2026-01-20T01:58:11.454431260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:11.468596 kubelet[3123]: E0120 01:58:11.457349 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:11.472954 kubelet[3123]: E0120 01:58:11.468652 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:11.476653 kubelet[3123]: E0120 01:58:11.473771 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:11.479790 kubelet[3123]: E0120 01:58:11.477010 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:58:12.261756 kubelet[3123]: E0120 01:58:12.261556 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:12.324762 kubelet[3123]: E0120 01:58:12.324635 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:58:12.633276 kubelet[3123]: I0120 01:58:12.633181 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hknvv" podStartSLOduration=506.633149676 podStartE2EDuration="8m26.633149676s" podCreationTimestamp="2026-01-20 01:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:58:12.622160587 +0000 UTC m=+507.070609313" watchObservedRunningTime="2026-01-20 01:58:12.633149676 +0000 UTC m=+507.081598401" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:09.696 [INFO][8308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:09.710 [INFO][8308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" iface="eth0" netns="/var/run/netns/cni-0de43dae-422e-44df-077d-61280af45626" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:09.729 [INFO][8308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" iface="eth0" netns="/var/run/netns/cni-0de43dae-422e-44df-077d-61280af45626" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:10.228 [INFO][8308] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" after=517.627335ms iface="eth0" netns="/var/run/netns/cni-0de43dae-422e-44df-077d-61280af45626" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:10.228 [INFO][8308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:10.229 [INFO][8308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:11.317 [INFO][8397] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:11.319 [INFO][8397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:11.319 [INFO][8397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
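From this point the failing pods cycle between ErrImagePull and ImagePullBackOff rather than retrying continuously: kubelet backs off between pull attempts, which is why the identical NotFound errors recur in this log spaced progressively further apart. A minimal sketch of that retry schedule, assuming kubelet's default image-pull backoff (10s initial delay, doubling per failure, capped at 5m; the actual configured values are not visible in this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Sketch of the retry schedule behind the "Back-off pulling image"
    // messages above, assuming kubelet's default image-pull backoff:
    // start at 10s, double per failed pull, cap at 5m.
    func main() {
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("pull attempt %d retried after %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Under those defaults the delay sequence is 10s, 20s, 40s, 1m20s, 2m40s, then pinned at 5m, matching the pattern of repeated back-off messages for the csi, apiserver, kube-controllers, and goldmane images above.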
Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:12.702 [INFO][8397] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:12.702 [INFO][8397] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:12.827 [INFO][8397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:12.957108 containerd[1643]: 2026-01-20 01:58:12.897 [INFO][8308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:13.042235 systemd[1]: run-netns-cni\x2d0de43dae\x2d422e\x2d44df\x2d077d\x2d61280af45626.mount: Deactivated successfully. Jan 20 01:58:13.081830 containerd[1643]: time="2026-01-20T01:58:13.067016966Z" level=info msg="TearDown network for sandbox \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" successfully" Jan 20 01:58:13.081830 containerd[1643]: time="2026-01-20T01:58:13.067130232Z" level=info msg="StopPodSandbox for \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" returns successfully" Jan 20 01:58:13.157064 containerd[1643]: time="2026-01-20T01:58:13.149763318Z" level=info msg="container event discarded" container=96ccc7b03b4d1a2fd6e804dc4157249879c5b166f5bf08a2c76e8fcc18810fd2 type=CONTAINER_STARTED_EVENT Jan 20 01:58:13.181000 audit[8430]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=8430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:13.181000 audit[8430]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe4e1b5b0 a2=0 a3=7fffe4e1b59c items=0 ppid=3237 pid=8430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:13.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:13.207000 audit[8430]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=8430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:13.207000 audit[8430]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffe4e1b5b0 a2=0 a3=0 items=0 ppid=3237 pid=8430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:13.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:13.244762 systemd-networkd[1534]: vxlan.calico: Link UP Jan 20 01:58:13.244777 systemd-networkd[1534]: vxlan.calico: Gained carrier Jan 20 01:58:13.309754 containerd[1643]: time="2026-01-20T01:58:13.301794178Z" level=info msg="container event discarded" container=3c4bffca7804a75f735671ce788b4a16bfb875d17bff5a43848044787c70f531 
type=CONTAINER_STARTED_EVENT Jan 20 01:58:13.447287 kubelet[3123]: I0120 01:58:13.435537 3123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-ca-bundle\") pod \"73120413-751d-4a6a-a82b-54ccc2e8bc99\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " Jan 20 01:58:13.447287 kubelet[3123]: I0120 01:58:13.439376 3123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-backend-key-pair\") pod \"73120413-751d-4a6a-a82b-54ccc2e8bc99\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " Jan 20 01:58:13.447287 kubelet[3123]: I0120 01:58:13.439434 3123 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fd7k\" (UniqueName: \"kubernetes.io/projected/73120413-751d-4a6a-a82b-54ccc2e8bc99-kube-api-access-4fd7k\") pod \"73120413-751d-4a6a-a82b-54ccc2e8bc99\" (UID: \"73120413-751d-4a6a-a82b-54ccc2e8bc99\") " Jan 20 01:58:13.608407 kubelet[3123]: E0120 01:58:13.608366 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:13.732221 kernel: kauditd_printk_skb: 81 callbacks suppressed Jan 20 01:58:13.732390 kernel: audit: type=1130 audit(1768874293.661:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.44:22-10.0.0.1:38360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:13.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.44:22-10.0.0.1:38360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:13.735078 kubelet[3123]: I0120 01:58:13.659079 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "73120413-751d-4a6a-a82b-54ccc2e8bc99" (UID: "73120413-751d-4a6a-a82b-54ccc2e8bc99"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:58:13.663878 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:38360.service - OpenSSH per-connection server daemon (10.0.0.1:38360). Jan 20 01:58:13.729554 systemd[1]: var-lib-kubelet-pods-73120413\x2d751d\x2d4a6a\x2da82b\x2d54ccc2e8bc99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4fd7k.mount: Deactivated successfully. Jan 20 01:58:13.729863 systemd[1]: var-lib-kubelet-pods-73120413\x2d751d\x2d4a6a\x2da82b\x2d54ccc2e8bc99-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:58:13.752225 kubelet[3123]: I0120 01:58:13.735942 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73120413-751d-4a6a-a82b-54ccc2e8bc99-kube-api-access-4fd7k" (OuterVolumeSpecName: "kube-api-access-4fd7k") pod "73120413-751d-4a6a-a82b-54ccc2e8bc99" (UID: "73120413-751d-4a6a-a82b-54ccc2e8bc99"). InnerVolumeSpecName "kube-api-access-4fd7k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:58:13.806624 kubelet[3123]: I0120 01:58:13.802988 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "73120413-751d-4a6a-a82b-54ccc2e8bc99" (UID: "73120413-751d-4a6a-a82b-54ccc2e8bc99"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:58:13.844437 kubelet[3123]: I0120 01:58:13.842783 3123 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:13.880659 kubelet[3123]: I0120 01:58:13.861784 3123 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73120413-751d-4a6a-a82b-54ccc2e8bc99-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:13.880659 kubelet[3123]: I0120 01:58:13.861846 3123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4fd7k\" (UniqueName: \"kubernetes.io/projected/73120413-751d-4a6a-a82b-54ccc2e8bc99-kube-api-access-4fd7k\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:13.901000 audit[8438]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=8438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:13.926790 kernel: audit: type=1325 audit(1768874293.901:779): table=filter:131 family=2 entries=20 op=nft_register_rule pid=8438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:13.988771 kernel: audit: type=1300 audit(1768874293.901:779): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8d1c33e0 a2=0 a3=7ffc8d1c33cc items=0 ppid=3237 pid=8438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:13.901000 audit[8438]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8d1c33e0 a2=0 a3=7ffc8d1c33cc items=0 ppid=3237 pid=8438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:13.992387 systemd[1]: Removed slice kubepods-besteffort-pod73120413_751d_4a6a_a82b_54ccc2e8bc99.slice - libcontainer container kubepods-besteffort-pod73120413_751d_4a6a_a82b_54ccc2e8bc99.slice. 
Jan 20 01:58:14.030290 kernel: audit: type=1327 audit(1768874293.901:779): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:13.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:13.944000 audit[8438]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=8438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:14.064776 kernel: audit: type=1325 audit(1768874293.944:780): table=nat:132 family=2 entries=14 op=nft_register_rule pid=8438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:13.944000 audit[8438]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc8d1c33e0 a2=0 a3=0 items=0 ppid=3237 pid=8438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.130050 kernel: audit: type=1300 audit(1768874293.944:780): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc8d1c33e0 a2=0 a3=0 items=0 ppid=3237 pid=8438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.156770 kernel: audit: type=1327 audit(1768874293.944:780): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:13.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:14.402131 kernel: audit: type=1334 audit(1768874294.350:781): prog-id=250 op=LOAD Jan 20 01:58:14.350000 audit: BPF prog-id=250 op=LOAD Jan 20 01:58:14.387908 systemd-networkd[1534]: vxlan.calico: Gained IPv6LL Jan 20 01:58:14.408830 kernel: audit: type=1300 audit(1768874294.350:781): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd12820b10 a2=98 a3=0 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.408950 kernel: audit: type=1327 audit(1768874294.350:781): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd12820b10 a2=98 a3=0 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit: BPF prog-id=250 op=UNLOAD Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd12820ae0 a3=0 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit: BPF prog-id=251 op=LOAD Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd12820920 a2=94 a3=54428f items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit: BPF prog-id=251 op=UNLOAD Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd12820920 a2=94 a3=54428f items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit: BPF prog-id=252 op=LOAD Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd12820950 a2=94 a3=2 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.350000 audit: BPF prog-id=252 op=UNLOAD Jan 20 01:58:14.349000 audit[8450]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=8450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:14.349000 audit[8450]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc585bd580 a2=0 a3=7ffc585bd56c items=0 ppid=3237 pid=8450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:14.350000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd12820950 a2=0 a3=2 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.350000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.390000 audit: BPF prog-id=253 op=LOAD Jan 20 01:58:14.390000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd12820700 a2=94 a3=4 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.390000 audit: BPF prog-id=253 op=UNLOAD Jan 20 01:58:14.390000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd12820700 a2=94 a3=4 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.390000 audit: BPF prog-id=254 op=LOAD Jan 20 01:58:14.390000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd12820800 a2=94 a3=7ffd12820980 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.390000 audit: BPF prog-id=254 op=UNLOAD Jan 20 01:58:14.390000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd12820800 a2=0 a3=7ffd12820980 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.395000 audit[8450]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=8450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:14.395000 audit[8450]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc585bd580 a2=0 a3=7ffc585bd56c items=0 ppid=3237 pid=8450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.395000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:14.494000 audit: BPF prog-id=255 
op=LOAD Jan 20 01:58:14.494000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd1281ff30 a2=94 a3=2 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.494000 audit: BPF prog-id=255 op=UNLOAD Jan 20 01:58:14.494000 audit[8452]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd1281ff30 a2=0 a3=2 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.494000 audit: BPF prog-id=256 op=LOAD Jan 20 01:58:14.494000 audit[8452]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd12820030 a2=94 a3=30 items=0 ppid=7538 pid=8452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.494000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:58:14.594103 kubelet[3123]: E0120 01:58:14.587405 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:14.924000 audit: BPF prog-id=257 op=LOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd532348b0 a2=98 a3=0 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:14.924000 audit: BPF prog-id=257 op=UNLOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd53234880 a3=0 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:14.924000 audit: BPF prog-id=258 op=LOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd532346a0 a2=94 a3=54428f items=0 ppid=7538 pid=8462 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:14.924000 audit: BPF prog-id=258 op=UNLOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd532346a0 a2=94 a3=54428f items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:14.924000 audit: BPF prog-id=259 op=LOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd532346d0 a2=94 a3=2 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:14.924000 audit: BPF prog-id=259 op=UNLOAD Jan 20 01:58:14.924000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd532346d0 a2=0 a3=2 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:14.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:15.251000 audit[8464]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=8464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:15.251000 audit[8464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe5594d430 a2=0 a3=7ffe5594d41c items=0 ppid=3237 pid=8464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:15.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:15.288000 audit[8464]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=8464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:15.288000 audit[8464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5594d430 a2=0 a3=7ffe5594d41c items=0 ppid=3237 pid=8464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:15.288000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:15.299000 audit[8436]: USER_ACCT pid=8436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:15.309723 sshd[8436]: Accepted publickey for core from 10.0.0.1 port 38360 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:15.359000 audit[8436]: CRED_ACQ pid=8436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:15.359000 audit[8436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccb4aabf0 a2=3 a3=0 items=0 ppid=1 pid=8436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:15.359000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:15.379386 sshd-session[8436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:15.544964 systemd-logind[1623]: New session 14 of user core. Jan 20 01:58:15.667356 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:58:15.741000 audit[8436]: USER_START pid=8436 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:15.757000 audit[8465]: CRED_ACQ pid=8465 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:15.853506 kubelet[3123]: I0120 01:58:15.849087 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdb7\" (UniqueName: \"kubernetes.io/projected/44944462-7130-49ee-b7c5-4cb73dea6058-kube-api-access-trdb7\") pod \"whisker-8bc549748-txp25\" (UID: \"44944462-7130-49ee-b7c5-4cb73dea6058\") " pod="calico-system/whisker-8bc549748-txp25" Jan 20 01:58:15.853506 kubelet[3123]: I0120 01:58:15.849229 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44944462-7130-49ee-b7c5-4cb73dea6058-whisker-ca-bundle\") pod \"whisker-8bc549748-txp25\" (UID: \"44944462-7130-49ee-b7c5-4cb73dea6058\") " pod="calico-system/whisker-8bc549748-txp25" Jan 20 01:58:15.853506 kubelet[3123]: I0120 01:58:15.849280 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/44944462-7130-49ee-b7c5-4cb73dea6058-whisker-backend-key-pair\") pod \"whisker-8bc549748-txp25\" (UID: \"44944462-7130-49ee-b7c5-4cb73dea6058\") " pod="calico-system/whisker-8bc549748-txp25" Jan 20 01:58:16.261746 systemd[1]: Created slice kubepods-besteffort-pod44944462_7130_49ee_b7c5_4cb73dea6058.slice - libcontainer 
container kubepods-besteffort-pod44944462_7130_49ee_b7c5_4cb73dea6058.slice. Jan 20 01:58:16.312745 kubelet[3123]: I0120 01:58:16.306558 3123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73120413-751d-4a6a-a82b-54ccc2e8bc99" path="/var/lib/kubelet/pods/73120413-751d-4a6a-a82b-54ccc2e8bc99/volumes" Jan 20 01:58:16.687867 containerd[1643]: time="2026-01-20T01:58:16.687612715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bc549748-txp25,Uid:44944462-7130-49ee-b7c5-4cb73dea6058,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:17.120000 audit: BPF prog-id=260 op=LOAD Jan 20 01:58:17.120000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd53234590 a2=94 a3=1 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.120000 audit: BPF prog-id=260 op=UNLOAD Jan 20 01:58:17.120000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd53234590 a2=94 a3=1 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.149811 sshd[8465]: Connection closed by 10.0.0.1 port 38360 Jan 20 01:58:17.149571 sshd-session[8436]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:17.158000 audit[8436]: USER_END pid=8436 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:17.158000 audit[8436]: CRED_DISP pid=8436 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:17.196642 systemd-logind[1623]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:58:17.208044 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:38360.service: Deactivated successfully. Jan 20 01:58:17.213381 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:58:17.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.44:22-10.0.0.1:38360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:17.222063 systemd-logind[1623]: Removed session 14. 
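Each audit SYSCALL record in this log is paired with a PROCTITLE record whose value is the process command line, hex-encoded with NUL-separated argv entries (e.g. the iptables-restore and sshd-session records above). A small standalone decoder — an illustration, not part of auditd; the stock `ausearch -i` performs equivalent decoding:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from the audit records above.
	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	raw, err := hex.DecodeString(p)
	if err != nil {
		panic(err)
	}
	// argv entries are NUL-separated inside the decoded bytes.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```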
Jan 20 01:58:17.237000 audit: BPF prog-id=261 op=LOAD Jan 20 01:58:17.237000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd53234580 a2=94 a3=4 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.237000 audit: BPF prog-id=261 op=UNLOAD Jan 20 01:58:17.237000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd53234580 a2=0 a3=4 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.238000 audit: BPF prog-id=262 op=LOAD Jan 20 01:58:17.238000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd532343e0 a2=94 a3=5 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.238000 audit: BPF prog-id=262 op=UNLOAD Jan 20 01:58:17.238000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd532343e0 a2=0 a3=5 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.238000 audit: BPF prog-id=263 op=LOAD Jan 20 01:58:17.238000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd53234600 a2=94 a3=6 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.238000 audit: BPF prog-id=263 op=UNLOAD Jan 20 01:58:17.238000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd53234600 a2=0 a3=6 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.238000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.238000 audit: BPF prog-id=264 op=LOAD Jan 20 01:58:17.238000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd53233db0 a2=94 a3=88 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.239000 audit: BPF prog-id=265 op=LOAD Jan 20 01:58:17.239000 audit[8462]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd53233c30 a2=94 a3=2 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.239000 audit: BPF prog-id=265 op=UNLOAD Jan 20 01:58:17.239000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd53233c60 a2=0 a3=7ffd53233d60 items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.239000 audit: BPF prog-id=264 op=UNLOAD Jan 20 01:58:17.239000 audit[8462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=31961d10 a2=0 a3=82ab2ed4366ea48a items=0 ppid=7538 pid=8462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:58:17.291000 audit: BPF prog-id=256 op=UNLOAD Jan 20 01:58:17.291000 audit[7538]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009c6240 a2=0 a3=0 items=0 ppid=7536 pid=7538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:17.291000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 01:58:18.024766 systemd-networkd[1534]: calia3036427cb5: Link UP Jan 20 01:58:18.029160 systemd-networkd[1534]: calia3036427cb5: Gained carrier Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.108 [INFO][8478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8bc549748--txp25-eth0 whisker-8bc549748- 
calico-system 44944462-7130-49ee-b7c5-4cb73dea6058 2094 0 2026-01-20 01:58:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8bc549748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8bc549748-txp25 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia3036427cb5 [] [] }} ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.109 [INFO][8478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.385 [INFO][8495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" HandleID="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Workload="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.389 [INFO][8495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" HandleID="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Workload="localhost-k8s-whisker--8bc549748--txp25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043afb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8bc549748-txp25", "timestamp":"2026-01-20 01:58:17.385551658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.389 [INFO][8495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.389 [INFO][8495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.389 [INFO][8495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.457 [INFO][8495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.511 [INFO][8495] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.629 [INFO][8495] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.659 [INFO][8495] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.736 [INFO][8495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.737 [INFO][8495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.750 [INFO][8495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7 Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.869 [INFO][8495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.995 [INFO][8495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.995 [INFO][8495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" host="localhost" Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.995 [INFO][8495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
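The IPAM trace above shows Calico taking the host-wide lock, confirming this host's affinity to the 192.168.88.128/26 block, and claiming 192.168.88.137 from it. A quick standalone check (illustrative Go using the standard library, not Calico code) that the claimed address falls inside the affine block:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block affinity and claimed address copied from the IPAM log above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	addr := netip.MustParseAddr("192.168.88.137")
	fmt.Println(block.Contains(addr))     // true: the claimed IP is in the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
}
```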
Jan 20 01:58:18.165793 containerd[1643]: 2026-01-20 01:58:17.996 [INFO][8495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" HandleID="k8s-pod-network.e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Workload="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.007 [INFO][8478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8bc549748--txp25-eth0", GenerateName:"whisker-8bc549748-", Namespace:"calico-system", SelfLink:"", UID:"44944462-7130-49ee-b7c5-4cb73dea6058", ResourceVersion:"2094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8bc549748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8bc549748-txp25", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3036427cb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.009 [INFO][8478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.010 [INFO][8478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3036427cb5 ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.031 [INFO][8478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.033 [INFO][8478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8bc549748--txp25-eth0", GenerateName:"whisker-8bc549748-", Namespace:"calico-system", SelfLink:"", UID:"44944462-7130-49ee-b7c5-4cb73dea6058", ResourceVersion:"2094", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8bc549748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7", Pod:"whisker-8bc549748-txp25", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3036427cb5", MAC:"4a:0d:dd:2b:d2:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:18.169267 containerd[1643]: 2026-01-20 01:58:18.148 [INFO][8478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" Namespace="calico-system" Pod="whisker-8bc549748-txp25" WorkloadEndpoint="localhost-k8s-whisker--8bc549748--txp25-eth0" Jan 20 01:58:18.458999 containerd[1643]: time="2026-01-20T01:58:18.458654201Z" level=info msg="connecting to shim e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7" address="unix:///run/containerd/s/158188f84dbefb219dd886b31ba716bf187f1b415e6753fdabe88b9a0cb320ec" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:18.835330 systemd[1]: Started cri-containerd-e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7.scope - libcontainer container e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7. 
Jan 20 01:58:19.102988 kernel: kauditd_printk_skb: 115 callbacks suppressed Jan 20 01:58:19.103094 kernel: audit: type=1325 audit(1768874299.058:825): table=nat:137 family=2 entries=15 op=nft_register_chain pid=8580 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.058000 audit[8580]: NETFILTER_CFG table=nat:137 family=2 entries=15 op=nft_register_chain pid=8580 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.068000 audit[8583]: NETFILTER_CFG table=raw:138 family=2 entries=21 op=nft_register_chain pid=8583 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.137925 systemd-networkd[1534]: calia3036427cb5: Gained IPv6LL Jan 20 01:58:19.154104 kernel: audit: type=1325 audit(1768874299.068:826): table=raw:138 family=2 entries=21 op=nft_register_chain pid=8583 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.068000 audit[8583]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1da2cc30 a2=0 a3=55d255011000 items=0 ppid=7538 pid=8583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.058000 audit[8580]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd9f5648d0 a2=0 a3=7ffd9f5648bc items=0 ppid=7538 pid=8580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.058000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.095000 audit[8596]: NETFILTER_CFG table=mangle:139 family=2 entries=16 op=nft_register_chain pid=8596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.190943 kernel: audit: type=1300 audit(1768874299.068:826): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1da2cc30 a2=0 a3=55d255011000 items=0 ppid=7538 pid=8583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.191017 kernel: audit: type=1327 audit(1768874299.068:826): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.191045 kernel: audit: type=1300 audit(1768874299.058:825): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd9f5648d0 a2=0 a3=7ffd9f5648bc items=0 ppid=7538 pid=8580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.191071 kernel: audit: type=1327 audit(1768874299.058:825): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.191093 kernel: audit: type=1325 audit(1768874299.095:827): table=mangle:139 family=2 entries=16 
op=nft_register_chain pid=8596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.191113 kernel: audit: type=1300 audit(1768874299.095:827): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff2a410300 a2=0 a3=5628d2ef8000 items=0 ppid=7538 pid=8596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.095000 audit[8596]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff2a410300 a2=0 a3=5628d2ef8000 items=0 ppid=7538 pid=8596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.246551 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:19.417427 kernel: audit: type=1327 audit(1768874299.095:827): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.447594 kernel: audit: type=1334 audit(1768874299.095:828): prog-id=266 op=LOAD Jan 20 01:58:19.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:19.095000 audit: BPF prog-id=266 op=LOAD Jan 20 01:58:19.103000 audit: BPF prog-id=267 op=LOAD Jan 20 01:58:19.103000 audit[8589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152238 a2=98 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.103000 audit: BPF prog-id=267 op=UNLOAD Jan 20 01:58:19.103000 audit[8589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.103000 audit: BPF prog-id=268 op=LOAD Jan 20 01:58:19.103000 audit[8589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152488 a2=98 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.103000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.112000 audit: BPF prog-id=269 op=LOAD Jan 20 01:58:19.112000 audit[8589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000152218 a2=98 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.121000 audit: BPF prog-id=269 op=UNLOAD Jan 20 01:58:19.121000 audit[8589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.121000 audit: BPF prog-id=268 op=UNLOAD Jan 20 01:58:19.121000 audit[8589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.130000 audit: BPF prog-id=270 op=LOAD Jan 20 01:58:19.130000 audit[8589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001526e8 a2=98 a3=0 items=0 ppid=8575 pid=8589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532393331653739623362636666356565393131333033373330363266 Jan 20 01:58:19.280000 audit[8602]: NETFILTER_CFG table=filter:140 family=2 entries=278 op=nft_register_chain pid=8602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:19.280000 audit[8602]: SYSCALL arch=c000003e syscall=46 success=yes exit=162296 a0=3 a1=7ffe8725d370 a2=0 a3=5581f8352000 items=0 ppid=7538 pid=8602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:19.280000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:20.135019 kubelet[3123]: E0120 01:58:20.132304 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:20.148650 containerd[1643]: time="2026-01-20T01:58:20.140923965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bc549748-txp25,Uid:44944462-7130-49ee-b7c5-4cb73dea6058,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7\"" Jan 20 01:58:20.440636 containerd[1643]: time="2026-01-20T01:58:20.433204441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:58:20.453366 kubelet[3123]: E0120 01:58:20.451969 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:20.554000 audit[8628]: NETFILTER_CFG table=filter:141 family=2 entries=73 op=nft_register_chain pid=8628 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:58:20.554000 audit[8628]: SYSCALL arch=c000003e syscall=46 success=yes exit=38792 a0=3 a1=7fffc10a1630 a2=0 a3=7fffc10a161c items=0 ppid=7538 pid=8628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:20.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:58:20.661464 containerd[1643]: time="2026-01-20T01:58:20.659402840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:20.699649 containerd[1643]: time="2026-01-20T01:58:20.699493896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:20.700809 containerd[1643]: time="2026-01-20T01:58:20.700095166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:58:20.707613 kubelet[3123]: E0120 01:58:20.706838 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:20.707613 kubelet[3123]: E0120 01:58:20.706955 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:20.707613 kubelet[3123]: E0120 01:58:20.707154 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:20.712230 containerd[1643]: time="2026-01-20T01:58:20.709618659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:58:21.243422 kubelet[3123]: E0120 01:58:21.243323 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:58:21.341792 containerd[1643]: time="2026-01-20T01:58:21.337781305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:21.351641 containerd[1643]: time="2026-01-20T01:58:21.351057307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:58:21.351641 containerd[1643]: time="2026-01-20T01:58:21.351306524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:21.355121 kubelet[3123]: E0120 01:58:21.354807 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:58:21.355786 kubelet[3123]: E0120 01:58:21.355057 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:58:21.356407 kubelet[3123]: E0120 01:58:21.356173 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:21.365835 kubelet[3123]: E0120 01:58:21.358307 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:58:21.418655 containerd[1643]: time="2026-01-20T01:58:21.399625100Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:58:21.631000 audit[8630]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=8630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:21.631000 audit[8630]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe83470ea0 a2=0 a3=7ffe83470e8c items=0 ppid=3237 pid=8630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:21.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:21.733393 containerd[1643]: time="2026-01-20T01:58:21.728896377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:21.826015 containerd[1643]: time="2026-01-20T01:58:21.823470684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:58:21.826015 containerd[1643]: time="2026-01-20T01:58:21.823569152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:21.840323 kubelet[3123]: E0120 01:58:21.832375 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:21.840323 kubelet[3123]: E0120 01:58:21.832451 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:21.840323 kubelet[3123]: E0120 01:58:21.832636 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:21.835000 audit[8630]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=8630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:21.835000 audit[8630]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe83470ea0 a2=0 a3=7ffe83470e8c items=0 ppid=3237 pid=8630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:21.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:21.862065 containerd[1643]: time="2026-01-20T01:58:21.851185433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:58:21.875565 kubelet[3123]: E0120 01:58:21.869872 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:58:22.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.44:22-10.0.0.1:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:22.463552 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:41672.service - OpenSSH per-connection server daemon (10.0.0.1:41672). Jan 20 01:58:22.541087 containerd[1643]: time="2026-01-20T01:58:22.537127177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:22.551817 containerd[1643]: time="2026-01-20T01:58:22.551739489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:58:22.552130 containerd[1643]: time="2026-01-20T01:58:22.552100118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:22.557161 kubelet[3123]: E0120 01:58:22.554353 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:22.557161 kubelet[3123]: E0120 01:58:22.554444 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:22.557161 kubelet[3123]: E0120 01:58:22.554639 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:22.564165 kubelet[3123]: E0120 01:58:22.561737 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:58:23.241000 audit[8637]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=8637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:23.241000 audit[8637]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb0f50ad0 a2=0 a3=7ffcb0f50abc items=0 ppid=3237 pid=8637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:23.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:23.269000 audit[8637]: 
NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=8637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:23.269000 audit[8637]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb0f50ad0 a2=0 a3=7ffcb0f50abc items=0 ppid=3237 pid=8637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:23.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:23.291000 audit[8633]: USER_ACCT pid=8633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:23.295438 sshd[8633]: Accepted publickey for core from 10.0.0.1 port 41672 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:23.307000 audit[8633]: CRED_ACQ pid=8633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:23.307000 audit[8633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd1fa6440 a2=3 a3=0 items=0 ppid=1 pid=8633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:23.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:23.312947 sshd-session[8633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:23.420812 systemd-logind[1623]: New session 15 of user core. Jan 20 01:58:23.492007 systemd[1]: Started session-15.scope - Session 15 of User core. 
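The PROCTITLE fields in the audit records above encode the audited command line as a hex string whose argv elements are separated by NUL bytes; ausearch -i renders them directly. A minimal Python sketch (a hypothetical helper, not part of this host's tooling) decodes the payload carried by events :825/:826:

    # Hypothetical helper: decode an audit PROCTITLE hex payload, in which
    # argv elements are separated by NUL bytes.
    def decode_proctitle(hex_payload: str) -> list[str]:
        return bytes.fromhex(hex_payload).decode("utf-8", errors="replace").split("\x00")

    # PROCTITLE value from audit events :825/:826 above.
    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    ))
    # -> ['iptables-nft-restore', '--noflush', '--verbose',
    #     '--wait', '10', '--wait-interval', '50000']

The iptables-restor records later in this section decode the same way, to iptables-restore -w 5 -W 100000 --noflush --counters.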
Jan 20 01:58:23.611000 audit[8633]: USER_START pid=8633 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:23.629000 audit[8638]: CRED_ACQ pid=8638 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:24.782553 kubelet[3123]: E0120 01:58:24.782404 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:24.829746 containerd[1643]: time="2026-01-20T01:58:24.822798431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:58:24.876898 sshd[8638]: Connection closed by 10.0.0.1 port 41672 Jan 20 01:58:24.884000 audit[8633]: USER_END pid=8633 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:24.861038 sshd-session[8633]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:24.921241 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 20 01:58:24.926096 kernel: audit: type=1106 audit(1768874304.884:848): pid=8633 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:24.968000 audit[8633]: CRED_DISP pid=8633 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:25.016176 kernel: audit: type=1104 audit(1768874304.968:849): pid=8633 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:25.032638 systemd-logind[1623]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:58:25.042374 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:41672.service: Deactivated successfully. Jan 20 01:58:25.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.44:22-10.0.0.1:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:25.101747 kernel: audit: type=1131 audit(1768874305.053:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.44:22-10.0.0.1:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:25.114979 containerd[1643]: time="2026-01-20T01:58:25.113567407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:25.118949 systemd[1]: session-15.scope: Deactivated successfully. 
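Every pull failure in this section is the same 404 from ghcr.io: the ghcr.io/flatcar/calico/*:v3.30.4 tags cannot be resolved. One hedged way to reproduce the containerd error outside the kubelet path is a manifest HEAD request against the OCI distribution API; the token endpoint and anonymous-pull behavior below are assumptions about ghcr.io, not something this log confirms:

    # Hypothetical reproduction of the 404 above via the OCI distribution API.
    # Assumption: ghcr.io issues anonymous pull tokens for public repositories.
    import json
    import urllib.error
    import urllib.request

    repo, tag = "flatcar/calico/kube-controllers", "v3.30.4"

    tok_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(tok_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        print("tag exists")
    except urllib.error.HTTPError as e:
        # A 404 here matches the "fetch failed after status: 404 Not Found"
        # containerd messages in this log.
        print("manifest fetch failed:", e.code)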
Jan 20 01:58:25.130081 systemd-logind[1623]: Removed session 15. Jan 20 01:58:25.138297 containerd[1643]: time="2026-01-20T01:58:25.138224507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:58:25.138875 containerd[1643]: time="2026-01-20T01:58:25.138500395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:25.142712 kubelet[3123]: E0120 01:58:25.140390 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:25.142712 kubelet[3123]: E0120 01:58:25.140494 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:25.148532 containerd[1643]: time="2026-01-20T01:58:25.145533736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:58:25.152011 kubelet[3123]: E0120 01:58:25.141064 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:25.153748 kubelet[3123]: E0120 01:58:25.153409 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:25.303027 containerd[1643]: time="2026-01-20T01:58:25.302607306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:25.318256 containerd[1643]: time="2026-01-20T01:58:25.315562570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:58:25.318256 containerd[1643]: time="2026-01-20T01:58:25.315794855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:25.318476 kubelet[3123]: E0120 01:58:25.316075 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:25.318476 kubelet[3123]: E0120 01:58:25.316180 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:25.318476 kubelet[3123]: E0120 01:58:25.316377 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:25.320554 kubelet[3123]: E0120 01:58:25.319763 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:58:25.795236 kubelet[3123]: E0120 01:58:25.792038 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:25.808246 kubelet[3123]: E0120 01:58:25.808084 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:58:29.926226 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:36074.service - OpenSSH per-connection server daemon (10.0.0.1:36074). Jan 20 01:58:29.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.44:22-10.0.0.1:36074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:29.991777 kernel: audit: type=1130 audit(1768874309.925:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.44:22-10.0.0.1:36074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:30.438000 audit[8670]: USER_ACCT pid=8670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.443966 sshd[8670]: Accepted publickey for core from 10.0.0.1 port 36074 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:30.445359 sshd-session[8670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:30.519791 kernel: audit: type=1101 audit(1768874310.438:852): pid=8670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.519933 kernel: audit: type=1103 audit(1768874310.443:853): pid=8670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.443000 audit[8670]: CRED_ACQ pid=8670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.507839 systemd-logind[1623]: New session 16 of user core. 
Jan 20 01:58:30.540607 kernel: audit: type=1006 audit(1768874310.443:854): pid=8670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 01:58:30.559256 kernel: audit: type=1300 audit(1768874310.443:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41e6cd70 a2=3 a3=0 items=0 ppid=1 pid=8670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:30.443000 audit[8670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41e6cd70 a2=3 a3=0 items=0 ppid=1 pid=8670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:30.561184 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:58:30.443000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:30.653833 kernel: audit: type=1327 audit(1768874310.443:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:30.654033 kernel: audit: type=1105 audit(1768874310.608:855): pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.608000 audit[8670]: USER_START pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.721429 kernel: audit: type=1103 audit(1768874310.630:856): pid=8673 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:30.630000 audit[8673]: CRED_ACQ pid=8673 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:31.327795 sshd[8673]: Connection closed by 10.0.0.1 port 36074 Jan 20 01:58:31.320943 sshd-session[8670]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:31.339000 audit[8670]: USER_END pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:31.360616 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:36074.service: Deactivated successfully. Jan 20 01:58:31.400089 systemd[1]: session-16.scope: Deactivated successfully. 
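Raw audit records like the USER_END/CRED_DISP pair above are flat key=value lines, with the PAM detail nested inside the single-quoted msg field. A rough Python sketch (illustrative only; ausearch is the real tool for querying these) that splits one such record, abridged from the log above, into fields:

    # Illustrative parser for one raw audit record; values may be bare tokens,
    # double-quoted strings, or a single-quoted msg='...' blob.
    import re

    line = ("audit[8670]: USER_END pid=8670 uid=0 auid=500 ses=16 "
            "subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close "
            "acct=\"core\" hostname=10.0.0.1 terminal=ssh res=success'")

    fields = dict(re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", line))
    print(fields["auid"], fields["ses"])   # 500 16
    print(fields["msg"])                   # quoted PAM detail, kept intact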
Jan 20 01:58:31.407452 kernel: audit: type=1106 audit(1768874311.339:857): pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:31.407987 kernel: audit: type=1104 audit(1768874311.342:858): pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:31.342000 audit[8670]: CRED_DISP pid=8670 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:31.429591 systemd-logind[1623]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:58:31.445012 systemd-logind[1623]: Removed session 16. Jan 20 01:58:31.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.44:22-10.0.0.1:36074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:33.826778 containerd[1643]: time="2026-01-20T01:58:33.815275308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:58:33.956138 containerd[1643]: time="2026-01-20T01:58:33.955859060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:33.970076 containerd[1643]: time="2026-01-20T01:58:33.969794396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:58:33.970076 containerd[1643]: time="2026-01-20T01:58:33.969943142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:34.001778 kubelet[3123]: E0120 01:58:34.001601 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:34.011003 kubelet[3123]: E0120 01:58:34.009909 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:34.011003 kubelet[3123]: E0120 01:58:34.010361 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:34.032589 kubelet[3123]: E0120 01:58:34.011969 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:58:34.032960 containerd[1643]: time="2026-01-20T01:58:34.017137233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:58:34.170028 containerd[1643]: time="2026-01-20T01:58:34.168827512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:34.201966 containerd[1643]: time="2026-01-20T01:58:34.198574645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:58:34.201966 containerd[1643]: time="2026-01-20T01:58:34.199155147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:34.204781 kubelet[3123]: E0120 
01:58:34.203822 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:34.204781 kubelet[3123]: E0120 01:58:34.203937 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:34.214374 kubelet[3123]: E0120 01:58:34.214189 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:34.237594 containerd[1643]: time="2026-01-20T01:58:34.237413029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:58:34.353756 containerd[1643]: time="2026-01-20T01:58:34.353483106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:34.362226 containerd[1643]: time="2026-01-20T01:58:34.360967142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:58:34.362226 containerd[1643]: time="2026-01-20T01:58:34.361188446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:34.362599 kubelet[3123]: E0120 01:58:34.361611 3123 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:58:34.362599 kubelet[3123]: E0120 01:58:34.361833 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:58:34.362599 kubelet[3123]: E0120 01:58:34.362012 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:34.365144 kubelet[3123]: E0120 01:58:34.365085 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:58:34.539907 containerd[1643]: time="2026-01-20T01:58:34.539324863Z" level=info msg="container event discarded" container=01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76 type=CONTAINER_CREATED_EVENT Jan 20 01:58:34.770285 kubelet[3123]: E0120 01:58:34.770036 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:35.570141 containerd[1643]: time="2026-01-20T01:58:35.569883908Z" level=info msg="container event discarded" container=01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76 type=CONTAINER_STARTED_EVENT Jan 20 01:58:35.815774 kubelet[3123]: E0120 01:58:35.815581 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:58:36.396100 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:43722.service - OpenSSH per-connection server daemon (10.0.0.1:43722). Jan 20 01:58:36.425000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:58:36.426183 kernel: audit: type=1130 audit(1768874316.393:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.44:22-10.0.0.1:43722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:36.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.44:22-10.0.0.1:43722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:58:36.750550 sshd[8689]: Accepted publickey for core from 10.0.0.1 port 43722 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:36.746000 audit[8689]: USER_ACCT pid=8689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:36.765236 sshd-session[8689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:36.817370 kernel: audit: type=1101 audit(1768874316.746:861): pid=8689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:36.817523 kubelet[3123]: E0120 01:58:36.812324 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:58:36.817523 kubelet[3123]: E0120 01:58:36.813759 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:36.762000 audit[8689]: CRED_ACQ pid=8689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:36.894651 kernel: audit: type=1103 audit(1768874316.762:862): pid=8689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:36.894888 kernel: audit: type=1006 audit(1768874316.762:863): pid=8689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 01:58:36.884580 systemd-logind[1623]: New session 17 of user core. 
Jan 20 01:58:36.762000 audit[8689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee367a400 a2=3 a3=0 items=0 ppid=1 pid=8689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:36.994119 kernel: audit: type=1300 audit(1768874316.762:863): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee367a400 a2=3 a3=0 items=0 ppid=1 pid=8689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:36.994337 kernel: audit: type=1327 audit(1768874316.762:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:36.762000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:37.001107 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:58:37.057000 audit[8689]: USER_START pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.112868 kernel: audit: type=1105 audit(1768874317.057:864): pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.111000 audit[8692]: CRED_ACQ pid=8692 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.143616 kernel: audit: type=1103 audit(1768874317.111:865): pid=8692 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.569325 sshd[8692]: Connection closed by 10.0.0.1 port 43722 Jan 20 01:58:37.570125 sshd-session[8689]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:37.573000 audit[8689]: USER_END pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.591255 systemd-logind[1623]: Session 17 logged out. Waiting for processes to exit. 
Jan 20 01:58:37.627335 kernel: audit: type=1106 audit(1768874317.573:866): pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.627501 kernel: audit: type=1104 audit(1768874317.573:867): pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.573000 audit[8689]: CRED_DISP pid=8689 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:37.607262 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:43722.service: Deactivated successfully. Jan 20 01:58:37.618232 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:58:37.664917 systemd-logind[1623]: Removed session 17. Jan 20 01:58:37.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.44:22-10.0.0.1:43722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:37.780821 containerd[1643]: time="2026-01-20T01:58:37.780762499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:58:37.915824 containerd[1643]: time="2026-01-20T01:58:37.908260887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:37.928247 containerd[1643]: time="2026-01-20T01:58:37.928163627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:58:37.928735 containerd[1643]: time="2026-01-20T01:58:37.928424907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:37.929503 kubelet[3123]: E0120 01:58:37.929387 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:37.929503 kubelet[3123]: E0120 01:58:37.929467 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:37.940598 kubelet[3123]: E0120 01:58:37.930763 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:37.943126 kubelet[3123]: E0120 01:58:37.942819 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:58:40.772872 kubelet[3123]: E0120 01:58:40.772813 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:42.708330 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:58:42.708729 kernel: audit: type=1130 audit(1768874322.643:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.44:22-10.0.0.1:43746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:42.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.44:22-10.0.0.1:43746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:58:42.644254 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:43746.service - OpenSSH per-connection server daemon (10.0.0.1:43746). Jan 20 01:58:43.107618 sshd[8716]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:43.105000 audit[8716]: USER_ACCT pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.176814 kernel: audit: type=1101 audit(1768874323.105:870): pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.212924 sshd-session[8716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:43.210000 audit[8716]: CRED_ACQ pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.271352 systemd-logind[1623]: New session 18 of user core. Jan 20 01:58:43.340623 kernel: audit: type=1103 audit(1768874323.210:871): pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.340848 kernel: audit: type=1006 audit(1768874323.210:872): pid=8716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 01:58:43.340889 kernel: audit: type=1300 audit(1768874323.210:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe77e12030 a2=3 a3=0 items=0 ppid=1 pid=8716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:43.210000 audit[8716]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe77e12030 a2=3 a3=0 items=0 ppid=1 pid=8716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:43.341473 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 01:58:43.210000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:43.448931 kernel: audit: type=1327 audit(1768874323.210:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:43.449068 kernel: audit: type=1105 audit(1768874323.401:873): pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.401000 audit[8716]: USER_START pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.557456 kernel: audit: type=1103 audit(1768874323.430:874): pid=8719 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:43.430000 audit[8719]: CRED_ACQ pid=8719 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:44.279647 sshd[8719]: Connection closed by 10.0.0.1 port 43746 Jan 20 01:58:44.278434 sshd-session[8716]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:44.283000 audit[8716]: USER_END pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:44.361524 kernel: audit: type=1106 audit(1768874324.283:875): pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:44.283000 audit[8716]: CRED_DISP pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:44.407977 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:43746.service: Deactivated successfully. Jan 20 01:58:44.424929 kernel: audit: type=1104 audit(1768874324.283:876): pid=8716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:44.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.44:22-10.0.0.1:43746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:44.431874 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:58:44.445531 systemd-logind[1623]: Session 18 logged out. Waiting for processes to exit. 
Jan 20 01:58:44.447767 systemd-logind[1623]: Removed session 18. Jan 20 01:58:46.787296 kubelet[3123]: E0120 01:58:46.787118 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:58:46.804881 containerd[1643]: time="2026-01-20T01:58:46.804627639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:58:47.182577 containerd[1643]: time="2026-01-20T01:58:47.164774855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:47.226760 containerd[1643]: time="2026-01-20T01:58:47.226508371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:58:47.226997 containerd[1643]: time="2026-01-20T01:58:47.226669190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:47.236000 kubelet[3123]: E0120 01:58:47.231806 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:47.236000 kubelet[3123]: E0120 01:58:47.231888 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:47.236000 kubelet[3123]: E0120 01:58:47.232069 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:47.280276 containerd[1643]: time="2026-01-20T01:58:47.280164239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:58:47.389437 containerd[1643]: time="2026-01-20T01:58:47.386765412Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:47.398812 containerd[1643]: time="2026-01-20T01:58:47.397349978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:58:47.398812 containerd[1643]: time="2026-01-20T01:58:47.397586572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:47.411850 kubelet[3123]: E0120 01:58:47.410029 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:47.411850 kubelet[3123]: E0120 01:58:47.410099 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:47.411850 kubelet[3123]: E0120 01:58:47.410264 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:47.419652 kubelet[3123]: E0120 01:58:47.414961 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:58:47.813600 containerd[1643]: time="2026-01-20T01:58:47.812778951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:58:47.927749 containerd[1643]: time="2026-01-20T01:58:47.926211592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
01:58:47.941929 containerd[1643]: time="2026-01-20T01:58:47.941848487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:58:47.942672 containerd[1643]: time="2026-01-20T01:58:47.942568537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:47.950839 kubelet[3123]: E0120 01:58:47.945156 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:47.950839 kubelet[3123]: E0120 01:58:47.945227 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:58:47.950839 kubelet[3123]: E0120 01:58:47.945989 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:47.950839 kubelet[3123]: E0120 01:58:47.947266 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:58:49.361212 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:37258.service - OpenSSH per-connection server daemon (10.0.0.1:37258). Jan 20 01:58:49.411806 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:58:49.412082 kernel: audit: type=1130 audit(1768874329.392:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.44:22-10.0.0.1:37258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:49.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.44:22-10.0.0.1:37258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:58:49.796947 kubelet[3123]: E0120 01:58:49.796338 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:58:49.808782 sshd[8762]: Accepted publickey for core from 10.0.0.1 port 37258 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:49.804000 audit[8762]: USER_ACCT pid=8762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:49.818337 sshd-session[8762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:49.889329 kernel: audit: type=1101 audit(1768874329.804:879): pid=8762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:49.804000 audit[8762]: CRED_ACQ pid=8762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:49.957300 kernel: audit: type=1103 audit(1768874329.804:880): pid=8762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:49.916756 systemd-logind[1623]: New session 19 of user core. 
Jan 20 01:58:49.804000 audit[8762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe642d8dc0 a2=3 a3=0 items=0 ppid=1 pid=8762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.071998 kernel: audit: type=1006 audit(1768874329.804:881): pid=8762 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 01:58:50.072176 kernel: audit: type=1300 audit(1768874329.804:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe642d8dc0 a2=3 a3=0 items=0 ppid=1 pid=8762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:49.804000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:50.084460 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:58:50.105747 kernel: audit: type=1327 audit(1768874329.804:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:50.121000 audit[8762]: USER_START pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.130000 audit[8765]: CRED_ACQ pid=8765 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.273432 kernel: audit: type=1105 audit(1768874330.121:882): pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.273942 kernel: audit: type=1103 audit(1768874330.130:883): pid=8765 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.772506 kubelet[3123]: E0120 01:58:50.772134 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:58:50.796668 sshd[8765]: Connection closed by 10.0.0.1 port 37258 Jan 20 01:58:50.802048 sshd-session[8762]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:50.811000 audit[8762]: USER_END pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.835364 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:37258.service: Deactivated successfully. Jan 20 01:58:50.900320 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:58:50.811000 audit[8762]: CRED_DISP pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:51.014208 kernel: audit: type=1106 audit(1768874330.811:884): pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:51.014327 kernel: audit: type=1104 audit(1768874330.811:885): pid=8762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:50.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.44:22-10.0.0.1:37258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:51.013567 systemd-logind[1623]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:58:51.041343 systemd-logind[1623]: Removed session 19. Jan 20 01:58:51.819964 containerd[1643]: time="2026-01-20T01:58:51.815595552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:58:52.026633 containerd[1643]: time="2026-01-20T01:58:52.026573667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:52.088558 containerd[1643]: time="2026-01-20T01:58:52.070390975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:58:52.088558 containerd[1643]: time="2026-01-20T01:58:52.070527166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:52.090975 kubelet[3123]: E0120 01:58:52.071626 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:52.090975 kubelet[3123]: E0120 01:58:52.072075 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:52.105208 kubelet[3123]: E0120 01:58:52.094393 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:52.105208 kubelet[3123]: E0120 01:58:52.096096 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:58:55.194292 containerd[1643]: time="2026-01-20T01:58:55.194235772Z" level=info msg="StopPodSandbox for \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\"" Jan 20 01:58:55.918000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.44:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:55.919259 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). Jan 20 01:58:55.944572 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:58:55.944624 kernel: audit: type=1130 audit(1768874335.918:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.44:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:55.749 [WARNING][8788] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:55.749 [INFO][8788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:55.750 [INFO][8788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" iface="eth0" netns="" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:55.750 [INFO][8788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:55.750 [INFO][8788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.095 [INFO][8796] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.095 [INFO][8796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.095 [INFO][8796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.176 [WARNING][8796] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.177 [INFO][8796] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.231 [INFO][8796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:58:56.316662 containerd[1643]: 2026-01-20 01:58:56.273 [INFO][8788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:56.321461 containerd[1643]: time="2026-01-20T01:58:56.318457728Z" level=info msg="TearDown network for sandbox \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" successfully" Jan 20 01:58:56.323179 containerd[1643]: time="2026-01-20T01:58:56.321574640Z" level=info msg="StopPodSandbox for \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" returns successfully" Jan 20 01:58:56.344311 containerd[1643]: time="2026-01-20T01:58:56.332829868Z" level=info msg="RemovePodSandbox for \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\"" Jan 20 01:58:56.344311 containerd[1643]: time="2026-01-20T01:58:56.332897719Z" level=info msg="Forcibly stopping sandbox \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\"" Jan 20 01:58:56.794000 audit[8802]: USER_ACCT pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:56.801039 sshd[8802]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:58:56.840423 sshd-session[8802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:56.893844 kernel: audit: type=1101 audit(1768874336.794:888): pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:56.894023 kernel: audit: type=1103 audit(1768874336.819:889): pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:56.819000 audit[8802]: CRED_ACQ pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:56.945031 systemd-logind[1623]: New session 20 of user core. 
Jan 20 01:58:57.031503 kernel: audit: type=1006 audit(1768874336.819:890): pid=8802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 01:58:57.031627 kernel: audit: type=1300 audit(1768874336.819:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0db34c90 a2=3 a3=0 items=0 ppid=1 pid=8802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:56.819000 audit[8802]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0db34c90 a2=3 a3=0 items=0 ppid=1 pid=8802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.121841 kernel: audit: type=1327 audit(1768874336.819:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:56.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:58:57.160649 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:58:57.289000 audit[8802]: USER_START pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:57.382916 kernel: audit: type=1105 audit(1768874337.289:891): pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:57.383448 kernel: audit: type=1103 audit(1768874337.325:892): pid=8831 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:57.325000 audit[8831]: CRED_ACQ pid=8831 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:57.850250 kubelet[3123]: E0120 01:58:57.849498 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.235 [WARNING][8818] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" WorkloadEndpoint="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.236 [INFO][8818] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.236 [INFO][8818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" iface="eth0" netns="" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.236 [INFO][8818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.236 [INFO][8818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.951 [INFO][8826] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.988 [INFO][8826] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:57.992 [INFO][8826] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:58.177 [WARNING][8826] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:58.177 [INFO][8826] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" HandleID="k8s-pod-network.034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Workload="localhost-k8s-whisker--6455dcb75d--z7fp4-eth0" Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:58.219 [INFO][8826] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:58.266376 containerd[1643]: 2026-01-20 01:58:58.235 [INFO][8818] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a" Jan 20 01:58:58.278438 containerd[1643]: time="2026-01-20T01:58:58.267307324Z" level=info msg="TearDown network for sandbox \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" successfully" Jan 20 01:58:58.400757 containerd[1643]: time="2026-01-20T01:58:58.400514711Z" level=info msg="Ensure that sandbox 034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a in task-service has been cleanup successfully" Jan 20 01:58:58.473207 containerd[1643]: time="2026-01-20T01:58:58.473016024Z" level=info msg="RemovePodSandbox \"034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a\" returns successfully" Jan 20 01:58:58.510632 containerd[1643]: time="2026-01-20T01:58:58.510445945Z" level=info msg="container event discarded" container=01d5f8908880a1aeb7f16f3cfd3a358a6def4a9cf94a38cc7994297050d81a76 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:58.563219 sshd[8831]: Connection closed by 10.0.0.1 port 32774 Jan 20 01:58:58.567790 sshd-session[8802]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:58.601000 audit[8802]: USER_END pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:58.694213 kernel: audit: type=1106 audit(1768874338.601:893): pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:58.694422 kernel: audit: type=1104 audit(1768874338.608:894): pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:58.608000 audit[8802]: CRED_DISP pid=8802 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:58:58.652311 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:32774.service: Deactivated successfully. Jan 20 01:58:58.705187 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:58:58.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.44:22-10.0.0.1:32774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:58:58.732659 systemd-logind[1623]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:58:58.745610 systemd-logind[1623]: Removed session 20. 
Jan 20 01:58:59.807821 kubelet[3123]: E0120 01:58:59.802649 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:59:01.820977 containerd[1643]: time="2026-01-20T01:59:01.812892152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:59:02.013653 containerd[1643]: time="2026-01-20T01:59:02.012893188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:02.029409 containerd[1643]: time="2026-01-20T01:59:02.029155437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:59:02.037153 containerd[1643]: time="2026-01-20T01:59:02.030898655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:02.041611 kubelet[3123]: E0120 01:59:02.041385 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:02.041611 kubelet[3123]: E0120 01:59:02.041518 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:02.042409 kubelet[3123]: E0120 01:59:02.041772 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:02.059445 containerd[1643]: time="2026-01-20T01:59:02.059193142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:59:02.168153 containerd[1643]: time="2026-01-20T01:59:02.164832243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:02.192398 containerd[1643]: time="2026-01-20T01:59:02.189475838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:59:02.192398 containerd[1643]: time="2026-01-20T01:59:02.189765684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:02.196738 kubelet[3123]: E0120 01:59:02.191671 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:02.196738 kubelet[3123]: E0120 01:59:02.195794 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:02.196738 kubelet[3123]: E0120 01:59:02.195990 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:02.198160 kubelet[3123]: E0120 01:59:02.198112 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:59:03.659311 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:32796.service - OpenSSH per-connection server daemon (10.0.0.1:32796). Jan 20 01:59:03.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.44:22-10.0.0.1:32796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:59:03.695439 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:03.695616 kernel: audit: type=1130 audit(1768874343.663:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.44:22-10.0.0.1:32796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:03.805267 kubelet[3123]: E0120 01:59:03.804973 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:59:04.184000 audit[8856]: USER_ACCT pid=8856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.203642 sshd[8856]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:04.212271 sshd-session[8856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:04.256792 kernel: audit: type=1101 audit(1768874344.184:897): pid=8856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.203000 audit[8856]: CRED_ACQ pid=8856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.282101 systemd-logind[1623]: New session 21 of user core. 
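The whisker pull failure just above shows the full error path: containerd's resolver gets an HTTP 404 ("fetch failed after status: 404 Not Found"), surfaces it as a gRPC NotFound across the CRI boundary, and kubelet re-logs it at each layer (log.go, kuberuntime_image.go, kuberuntime_manager.go) before pod_workers records the sync failure. A minimal sketch reproducing the resolution failure with the containerd 1.x Go client; the socket path and the "k8s.io" namespace are the conventional CRI defaults, assumed rather than read from this log.

```go
// Minimal sketch: pulling a tag that does not exist upstream resolves to
// NotFound, matching "failed to resolve image ...: not found" above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4",
		containerd.WithPullUnpack)
	if errdefs.IsNotFound(err) {
		fmt.Println("image tag does not exist upstream:", err)
	} else if err != nil {
		fmt.Println("pull failed for another reason:", err)
	}
}
```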
Jan 20 01:59:04.337777 kernel: audit: type=1103 audit(1768874344.203:898): pid=8856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.337943 kernel: audit: type=1006 audit(1768874344.203:899): pid=8856 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 01:59:04.374072 kernel: audit: type=1300 audit(1768874344.203:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdaf24390 a2=3 a3=0 items=0 ppid=1 pid=8856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:04.203000 audit[8856]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdaf24390 a2=3 a3=0 items=0 ppid=1 pid=8856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:04.446792 kernel: audit: type=1327 audit(1768874344.203:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:04.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:04.504401 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:59:04.541000 audit[8856]: USER_START pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.642573 kernel: audit: type=1105 audit(1768874344.541:900): pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.558000 audit[8859]: CRED_ACQ pid=8859 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:04.779573 kernel: audit: type=1103 audit(1768874344.558:901): pid=8859 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:05.646757 sshd[8859]: Connection closed by 10.0.0.1 port 32796 Jan 20 01:59:05.647956 sshd-session[8856]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:05.764975 kernel: audit: type=1106 audit(1768874345.643:902): pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:05.643000 audit[8856]: USER_END pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:05.698916 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:32796.service: Deactivated successfully. Jan 20 01:59:05.759263 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:59:05.643000 audit[8856]: CRED_DISP pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:05.795867 systemd-logind[1623]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:59:05.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.44:22-10.0.0.1:32796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:05.845087 kernel: audit: type=1104 audit(1768874345.643:903): pid=8856 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:05.862215 systemd-logind[1623]: Removed session 21. Jan 20 01:59:05.943223 kubelet[3123]: E0120 01:59:05.919880 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:59:07.811982 kubelet[3123]: E0120 01:59:07.811906 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:59:08.815789 kubelet[3123]: E0120 01:59:08.800462 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:08.815789 kubelet[3123]: E0120 01:59:08.801397 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:59:10.738830 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:56896.service - OpenSSH per-connection server daemon 
(10.0.0.1:56896). Jan 20 01:59:10.753035 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:10.753168 kernel: audit: type=1130 audit(1768874350.743:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.44:22-10.0.0.1:56896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:10.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.44:22-10.0.0.1:56896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:11.200000 audit[8875]: USER_ACCT pid=8875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.213199 sshd[8875]: Accepted publickey for core from 10.0.0.1 port 56896 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:11.226451 sshd-session[8875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:11.222000 audit[8875]: CRED_ACQ pid=8875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.287527 systemd-logind[1623]: New session 22 of user core. Jan 20 01:59:11.341116 kernel: audit: type=1101 audit(1768874351.200:906): pid=8875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.341331 kernel: audit: type=1103 audit(1768874351.222:907): pid=8875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.341371 kernel: audit: type=1006 audit(1768874351.222:908): pid=8875 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 01:59:11.355175 kernel: audit: type=1300 audit(1768874351.222:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6ae1d30 a2=3 a3=0 items=0 ppid=1 pid=8875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.222000 audit[8875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6ae1d30 a2=3 a3=0 items=0 ppid=1 pid=8875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.222000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:11.434095 kernel: audit: type=1327 audit(1768874351.222:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:11.450037 systemd[1]: Started session-22.scope - Session 22 of User core. 
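The recurring "Back-off pulling image" lines above come from kubelet's per-image pull backoff: between attempts it waits a capped, exponentially growing delay, which is why the PullImage attempts for the same tags are minutes apart by this point in the log. The 10s initial / 300s ceiling values below are kubelet's long-standing defaults, stated as an assumption rather than read from this log.

```go
// Capped exponential backoff of the kind behind ImagePullBackOff.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, limit := 10*time.Second, 300*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit // pinned at the 5m ceiling from here on
		}
	}
}
```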
Jan 20 01:59:11.522000 audit[8875]: USER_START pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.611024 kernel: audit: type=1105 audit(1768874351.522:909): pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.551000 audit[8878]: CRED_ACQ pid=8878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.633788 kernel: audit: type=1103 audit(1768874351.551:910): pid=8878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:11.829979 kubelet[3123]: E0120 01:59:11.829800 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:59:12.265652 sshd[8878]: Connection closed by 10.0.0.1 port 56896 Jan 20 01:59:12.268061 sshd-session[8875]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:12.304000 audit[8875]: USER_END pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:12.326905 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:56896.service: Deactivated successfully. Jan 20 01:59:12.386486 systemd[1]: session-22.scope: Deactivated successfully. 
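The recurring dns.go "Nameserver limits exceeded" warning above reflects a glibc constraint: the stub resolver honors at most three `nameserver` entries (MAXNS) from resolv.conf, so kubelet warns when the node has more and applies only the first three. A sketch of the trim; the fourth server below is invented for illustration, since the log does not say which entries were omitted.

```go
// glibc reads at most three nameserver entries from resolv.conf; kubelet
// warns and keeps the first three, as in the "applied nameserver line".
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS

func applyNameserverLimit(servers []string) (kept, dropped []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	kept, dropped := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}) // 4th is hypothetical
	fmt.Println("applied:", kept) // matches the log's applied line
	fmt.Println("omitted:", dropped)
}
```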
Jan 20 01:59:12.392071 kernel: audit: type=1106 audit(1768874352.304:911): pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:12.392173 kernel: audit: type=1104 audit(1768874352.304:912): pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:12.304000 audit[8875]: CRED_DISP pid=8875 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:12.412627 systemd-logind[1623]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:59:12.442647 systemd-logind[1623]: Removed session 22. Jan 20 01:59:12.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.44:22-10.0.0.1:56896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:12.832798 kubelet[3123]: E0120 01:59:12.832530 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:59:17.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.44:22-10.0.0.1:34872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:17.368529 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:34872.service - OpenSSH per-connection server daemon (10.0.0.1:34872). Jan 20 01:59:17.483514 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:17.484164 kernel: audit: type=1130 audit(1768874357.365:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.44:22-10.0.0.1:34872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:59:18.104000 audit[8893]: USER_ACCT pid=8893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.137792 sshd-session[8893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:18.146376 sshd[8893]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:18.198441 kernel: audit: type=1101 audit(1768874358.104:915): pid=8893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.130000 audit[8893]: CRED_ACQ pid=8893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.306979 systemd-logind[1623]: New session 23 of user core. Jan 20 01:59:18.314661 kernel: audit: type=1103 audit(1768874358.130:916): pid=8893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.360794 kernel: audit: type=1006 audit(1768874358.130:917): pid=8893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 01:59:18.362669 systemd[1]: Started session-23.scope - Session 23 of User core. 
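Each accepted connection gets its own unit, e.g. `sshd@22-10.0.0.44:22-10.0.0.1:34872.service`: the instance name systemd generates for a socket-activated template with Accept=yes encodes a connection counter plus the local and peer endpoints, which is what lets the SERVICE_START/SERVICE_STOP audit records be tied to the matching "Connection closed" line. A hypothetical splitter for that naming scheme; this parser is ours, not a systemd API.

```go
// Parse a per-connection sshd unit name into counter, local endpoint, and
// peer endpoint. Relies on the instance format "<n>-<local>-<peer>" where
// the endpoints contain no hyphens (true for the IPv4 addresses here).
package main

import (
	"fmt"
	"strings"
)

func parseSSHDUnit(unit string) (seq, local, peer string, err error) {
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected instance name %q", inst)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	seq, local, peer, err := parseSSHDUnit("sshd@22-10.0.0.44:22-10.0.0.1:34872.service")
	if err != nil {
		panic(err)
	}
	fmt.Printf("conn #%s  local=%s  peer=%s\n", seq, local, peer)
}
```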
Jan 20 01:59:18.130000 audit[8893]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2cbef610 a2=3 a3=0 items=0 ppid=1 pid=8893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:18.486186 kernel: audit: type=1300 audit(1768874358.130:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2cbef610 a2=3 a3=0 items=0 ppid=1 pid=8893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:18.492796 kernel: audit: type=1327 audit(1768874358.130:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:18.130000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:18.541000 audit[8893]: USER_START pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.712506 kernel: audit: type=1105 audit(1768874358.541:918): pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.712758 kernel: audit: type=1103 audit(1768874358.588:919): pid=8921 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:18.588000 audit[8921]: CRED_ACQ pid=8921 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:20.179814 kubelet[3123]: E0120 01:59:20.179351 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:59:20.234618 sshd[8921]: Connection closed by 10.0.0.1 port 34872 Jan 20 01:59:20.253432 sshd-session[8893]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:20.259000 audit[8893]: USER_END pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:20.367952 kernel: audit: type=1106 audit(1768874360.259:920): pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:20.289845 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:34872.service: Deactivated successfully. Jan 20 01:59:20.315013 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:59:20.357505 systemd-logind[1623]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:59:20.370654 systemd-logind[1623]: Removed session 23. Jan 20 01:59:20.265000 audit[8893]: CRED_DISP pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:20.466767 kernel: audit: type=1104 audit(1768874360.265:921): pid=8893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:20.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.44:22-10.0.0.1:34872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:20.817828 containerd[1643]: time="2026-01-20T01:59:20.816049652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:21.090476 containerd[1643]: time="2026-01-20T01:59:21.088049777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:21.109901 containerd[1643]: time="2026-01-20T01:59:21.109819202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:21.110243 containerd[1643]: time="2026-01-20T01:59:21.110218048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:21.110571 kubelet[3123]: E0120 01:59:21.110466 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:21.111362 kubelet[3123]: E0120 01:59:21.111331 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:21.112143 kubelet[3123]: E0120 01:59:21.112079 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:21.131658 kubelet[3123]: E0120 01:59:21.128207 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:59:21.812152 kubelet[3123]: E0120 01:59:21.810107 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:59:22.775358 containerd[1643]: time="2026-01-20T01:59:22.771065427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:22.888049 containerd[1643]: time="2026-01-20T01:59:22.886766932Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 20 01:59:22.905260 containerd[1643]: time="2026-01-20T01:59:22.898564782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:22.905491 containerd[1643]: time="2026-01-20T01:59:22.901429691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:22.905602 kubelet[3123]: E0120 01:59:22.905528 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:22.909557 kubelet[3123]: E0120 01:59:22.905605 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:22.921500 kubelet[3123]: E0120 01:59:22.912020 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:22.921500 kubelet[3123]: E0120 01:59:22.919049 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:59:24.799801 kubelet[3123]: E0120 01:59:24.796355 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:59:25.357646 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:49276.service - OpenSSH per-connection server daemon (10.0.0.1:49276). Jan 20 01:59:25.418151 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:25.418229 kernel: audit: type=1130 audit(1768874365.358:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.44:22-10.0.0.1:49276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:25.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.44:22-10.0.0.1:49276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:59:25.787780 kubelet[3123]: E0120 01:59:25.778671 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:26.390000 audit[8937]: USER_ACCT pid=8937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.400550 sshd-session[8937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:26.419488 sshd[8937]: Accepted publickey for core from 10.0.0.1 port 49276 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:26.426001 kernel: audit: type=1101 audit(1768874366.390:924): pid=8937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.494762 kernel: audit: type=1103 audit(1768874366.398:925): pid=8937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.398000 audit[8937]: CRED_ACQ pid=8937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.539315 systemd-logind[1623]: New session 24 of user core. Jan 20 01:59:26.398000 audit[8937]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2c1bc6a0 a2=3 a3=0 items=0 ppid=1 pid=8937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:26.631351 kernel: audit: type=1006 audit(1768874366.398:926): pid=8937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 01:59:26.631528 kernel: audit: type=1300 audit(1768874366.398:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2c1bc6a0 a2=3 a3=0 items=0 ppid=1 pid=8937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:26.398000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:26.658472 kernel: audit: type=1327 audit(1768874366.398:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:26.658570 systemd[1]: Started session-24.scope - Session 24 of User core. 
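The container spec dumps above all end in the same transport detail: "rpc error: code = NotFound". The gRPC status code is what distinguishes a permanently missing tag from a transient registry problem when reading these errors. A sketch of classifying by code with the standard grpc-go status package; the error here is constructed locally for illustration, where on a node it would come back across the CRI socket.

```go
// Classify a CRI pull error by gRPC status code.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	err := status.Error(codes.NotFound,
		`failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.30.4"`)

	switch status.Code(err) {
	case codes.NotFound:
		fmt.Println("permanent for this tag; only a registry-side fix helps")
	case codes.Unavailable, codes.DeadlineExceeded:
		fmt.Println("transient; worth retrying with backoff")
	default:
		fmt.Println("unclassified pull failure:", err)
	}
}
```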
Jan 20 01:59:26.705000 audit[8937]: USER_START pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.824046 kernel: audit: type=1105 audit(1768874366.705:927): pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.824221 kernel: audit: type=1103 audit(1768874366.754:928): pid=8940 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.754000 audit[8940]: CRED_ACQ pid=8940 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:26.824403 kubelet[3123]: E0120 01:59:26.804522 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:27.844639 kubelet[3123]: E0120 01:59:27.844481 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:59:28.370593 sshd[8940]: Connection closed by 10.0.0.1 port 49276 Jan 20 01:59:28.364210 sshd-session[8937]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:28.412000 audit[8937]: USER_END pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:28.444108 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:49276.service: Deactivated successfully. Jan 20 01:59:28.476654 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 20 01:59:28.519459 kernel: audit: type=1106 audit(1768874368.412:929): pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:28.519633 kernel: audit: type=1104 audit(1768874368.412:930): pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:28.412000 audit[8937]: CRED_DISP pid=8937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:28.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.44:22-10.0.0.1:49276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:28.595952 systemd-logind[1623]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:59:28.624981 systemd-logind[1623]: Removed session 24. Jan 20 01:59:31.785271 kubelet[3123]: E0120 01:59:31.784433 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:32.818629 containerd[1643]: time="2026-01-20T01:59:32.813466092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:59:32.958560 containerd[1643]: time="2026-01-20T01:59:32.955227699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:32.970415 containerd[1643]: time="2026-01-20T01:59:32.967649435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:59:32.970415 containerd[1643]: time="2026-01-20T01:59:32.967978518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:32.973049 kubelet[3123]: E0120 01:59:32.968187 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:32.973049 kubelet[3123]: E0120 01:59:32.968257 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:32.973049 kubelet[3123]: E0120 01:59:32.968511 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:32.973049 kubelet[3123]: E0120 01:59:32.970067 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:59:33.426168 systemd[1]: Started sshd@24-10.0.0.44:22-10.0.0.1:49302.service - OpenSSH per-connection server daemon (10.0.0.1:49302). 
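Annotation: the goldmane episode above shows the full failure chain: containerd gets a 404 from ghcr.io while resolving the tag, returns a NotFound rpc error to the kubelet's CRI client, and pod_workers records the sync failure. A hedged way to reproduce just the containerd step from the node, using the containerd Go client against the same socket and the k8s.io namespace the CRI plugin uses; this assumes the containerd 1.x client API (the import path differs in 2.x):

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the CRI plugin serves on this node (assumed default path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Resolving a missing tag surfaces the same "not found" that the
	// kubelet entries above wrap in an rpc error.
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		fmt.Println("pull failed:", err)
	}
}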
Jan 20 01:59:33.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.44:22-10.0.0.1:49302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:33.438406 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:33.438545 kernel: audit: type=1130 audit(1768874373.426:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.44:22-10.0.0.1:49302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:33.819972 kubelet[3123]: E0120 01:59:33.800127 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:59:33.918000 audit[8956]: USER_ACCT pid=8956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:33.923220 sshd[8956]: Accepted publickey for core from 10.0.0.1 port 49302 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:33.943244 sshd-session[8956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:33.922000 audit[8956]: CRED_ACQ pid=8956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.016056 kernel: audit: type=1101 audit(1768874373.918:933): pid=8956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.016195 kernel: audit: type=1103 audit(1768874373.922:934): pid=8956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.016251 kernel: audit: type=1006 audit(1768874373.922:935): pid=8956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 01:59:34.037857 kernel: audit: type=1300 audit(1768874373.922:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13b2fd90 a2=3 a3=0 items=0 ppid=1 pid=8956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:33.922000 audit[8956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13b2fd90 a2=3 a3=0 items=0 ppid=1 pid=8956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:34.055142 systemd-logind[1623]: New session 25 of user core. Jan 20 01:59:33.922000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:34.104106 kernel: audit: type=1327 audit(1768874373.922:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:34.112991 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:59:34.138000 audit[8956]: USER_START pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.215802 kernel: audit: type=1105 audit(1768874374.138:936): pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.215964 kernel: audit: type=1103 audit(1768874374.146:937): pid=8959 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.146000 audit[8959]: CRED_ACQ pid=8959 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:34.822469 containerd[1643]: time="2026-01-20T01:59:34.810578883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:59:35.033519 sshd[8959]: Connection closed by 10.0.0.1 port 49302 Jan 20 01:59:35.065883 sshd-session[8956]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:35.221000 audit[8956]: USER_END pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:35.269062 systemd[1]: sshd@24-10.0.0.44:22-10.0.0.1:49302.service: Deactivated successfully. 
Jan 20 01:59:35.291642 kernel: audit: type=1106 audit(1768874375.221:938): pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:35.291961 kernel: audit: type=1104 audit(1768874375.224:939): pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:35.224000 audit[8956]: CRED_DISP pid=8956 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:35.317012 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:59:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.44:22-10.0.0.1:49302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:35.339120 systemd-logind[1623]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:59:35.363876 systemd-logind[1623]: Removed session 25. Jan 20 01:59:35.416174 containerd[1643]: time="2026-01-20T01:59:35.411991017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:35.431773 containerd[1643]: time="2026-01-20T01:59:35.429859182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:59:35.431773 containerd[1643]: time="2026-01-20T01:59:35.429995213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:35.431991 kubelet[3123]: E0120 01:59:35.430662 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:35.431991 kubelet[3123]: E0120 01:59:35.430816 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:35.431991 kubelet[3123]: E0120 01:59:35.430989 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:35.435609 kubelet[3123]: E0120 01:59:35.435173 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:59:35.772650 kubelet[3123]: E0120 01:59:35.771946 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 01:59:36.774222 kubelet[3123]: E0120 01:59:36.774155 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:59:38.794798 containerd[1643]: time="2026-01-20T01:59:38.794613492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:59:39.020060 containerd[1643]: time="2026-01-20T01:59:39.019538824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:39.026762 containerd[1643]: time="2026-01-20T01:59:39.026350294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:59:39.026762 containerd[1643]: time="2026-01-20T01:59:39.026483750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:39.028826 kubelet[3123]: E0120 01:59:39.028182 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:39.028826 kubelet[3123]: E0120 01:59:39.028263 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:39.028826 kubelet[3123]: E0120 01:59:39.028441 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:39.040600 containerd[1643]: time="2026-01-20T01:59:39.040212147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:59:39.126194 containerd[1643]: time="2026-01-20T01:59:39.125586765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:39.136915 containerd[1643]: time="2026-01-20T01:59:39.136194342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:59:39.136915 containerd[1643]: time="2026-01-20T01:59:39.136370992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:39.137206 kubelet[3123]: E0120 01:59:39.136589 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:39.137206 kubelet[3123]: E0120 01:59:39.136655 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:39.141817 kubelet[3123]: E0120 01:59:39.140459 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:39.149834 kubelet[3123]: E0120 01:59:39.143211 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:59:40.152903 systemd[1]: Started sshd@25-10.0.0.44:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). 
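Annotation: from here the csi-node-driver-x6f5h entries alternate between ErrImagePull (a pull attempt just failed) and ImagePullBackOff (the kubelet is waiting out a backoff before retrying), which is why the same two images reappear every few minutes. The kubelet's image pull backoff grows roughly exponentially up to a cap; the 10s initial delay and 5m cap below are the commonly cited defaults and are an assumption, not something read from this log:

package main

import (
	"fmt"
	"time"
)

// backoff sketches a kubelet-style image pull backoff: double per
// consecutive failure, capped at max (constants assumed, see above).
func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("failure %d -> wait %s\n", n,
			backoff(10*time.Second, 5*time.Minute, n))
	}
}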
Jan 20 01:59:40.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.44:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:40.225625 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:40.230305 kernel: audit: type=1130 audit(1768874380.152:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.44:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:40.672000 audit[8981]: USER_ACCT pid=8981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:40.732629 sshd-session[8981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:40.745770 sshd[8981]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:40.809644 kernel: audit: type=1101 audit(1768874380.672:942): pid=8981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:40.810216 kernel: audit: type=1103 audit(1768874380.726:943): pid=8981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:40.726000 audit[8981]: CRED_ACQ pid=8981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:40.900297 kernel: audit: type=1006 audit(1768874380.730:944): pid=8981 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 01:59:40.827935 systemd-logind[1623]: New session 26 of user core. 
Jan 20 01:59:41.044380 kernel: audit: type=1300 audit(1768874380.730:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1500a4a0 a2=3 a3=0 items=0 ppid=1 pid=8981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:41.044617 kernel: audit: type=1327 audit(1768874380.730:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:40.730000 audit[8981]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1500a4a0 a2=3 a3=0 items=0 ppid=1 pid=8981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:40.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:41.045647 kubelet[3123]: E0120 01:59:40.816665 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:59:40.999084 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 01:59:41.153000 audit[8981]: USER_START pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:41.253257 kernel: audit: type=1105 audit(1768874381.153:945): pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:41.221000 audit[8984]: CRED_ACQ pid=8984 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:41.342025 kernel: audit: type=1103 audit(1768874381.221:946): pid=8984 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:42.879219 sshd[8984]: Connection closed by 10.0.0.1 port 47730 Jan 20 01:59:42.911144 sshd-session[8981]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:42.932000 audit[8981]: USER_END pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:42.932000 audit[8981]: CRED_DISP pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:42.974458 systemd-logind[1623]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:59:42.981166 systemd[1]: sshd@25-10.0.0.44:22-10.0.0.1:47730.service: Deactivated successfully. Jan 20 01:59:43.007644 kernel: audit: type=1106 audit(1768874382.932:947): pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:43.008346 kernel: audit: type=1104 audit(1768874382.932:948): pid=8981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:43.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.44:22-10.0.0.1:47730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:43.011005 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:59:43.023004 systemd-logind[1623]: Removed session 26. 
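Annotation: every audit record above is stamped audit(EPOCH.MILLIS:SERIAL); the serial is what lets the delayed kernel ring-buffer copies (the "kernel: audit: type=..." lines) be matched back to the original records despite the out-of-order journal timestamps. A quick conversion sketch for one stamp copied from above:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stamp copied from one of the audit records above.
	const stamp = "1768874380.152:941"

	parts := strings.SplitN(stamp, ":", 2)
	secs := strings.SplitN(parts[0], ".", 2)

	s, _ := strconv.ParseInt(secs[0], 10, 64)
	ms, _ := strconv.ParseInt(secs[1], 10, 64)
	serial, _ := strconv.Atoi(parts[1])

	t := time.Unix(s, ms*int64(time.Millisecond))
	fmt.Printf("serial %d at %s\n", serial, t.UTC().Format(time.RFC3339Nano))
	// Matches the journal's Jan 20 01:59:40 entries.
}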
Jan 20 01:59:43.814774 kubelet[3123]: E0120 01:59:43.790555 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:59:45.805595 kubelet[3123]: E0120 01:59:45.796028 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:46.866444 kubelet[3123]: E0120 01:59:46.860084 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 01:59:47.888581 kubelet[3123]: E0120 01:59:47.888509 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 01:59:48.131577 systemd[1]: Started sshd@26-10.0.0.44:22-10.0.0.1:45044.service - OpenSSH per-connection server daemon (10.0.0.1:45044). Jan 20 01:59:48.241825 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:59:48.242001 kernel: audit: type=1130 audit(1768874388.145:950): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.44:22-10.0.0.1:45044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:48.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.44:22-10.0.0.1:45044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:59:48.791556 kubelet[3123]: E0120 01:59:48.788961 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 01:59:49.166000 audit[9048]: USER_ACCT pid=9048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.174894 sshd[9048]: Accepted publickey for core from 10.0.0.1 port 45044 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:49.235611 sshd-session[9048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:49.313556 kernel: audit: type=1101 audit(1768874389.166:951): pid=9048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.322001 kernel: audit: type=1103 audit(1768874389.205:952): pid=9048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.205000 audit[9048]: CRED_ACQ pid=9048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.349575 systemd-logind[1623]: New session 27 of user core. Jan 20 01:59:49.421855 kernel: audit: type=1006 audit(1768874389.205:953): pid=9048 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 01:59:49.438502 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 01:59:49.205000 audit[9048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd810239e0 a2=3 a3=0 items=0 ppid=1 pid=9048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:49.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:49.568591 kernel: audit: type=1300 audit(1768874389.205:953): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd810239e0 a2=3 a3=0 items=0 ppid=1 pid=9048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:49.568806 kernel: audit: type=1327 audit(1768874389.205:953): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:49.568895 kernel: audit: type=1105 audit(1768874389.556:954): pid=9048 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.556000 audit[9048]: USER_START pid=9048 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.604000 audit[9051]: CRED_ACQ pid=9051 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:49.747769 kernel: audit: type=1103 audit(1768874389.604:955): pid=9051 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:50.079239 kubelet[3123]: E0120 01:59:50.079168 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 01:59:51.176382 sshd[9051]: Connection closed by 10.0.0.1 port 45044 Jan 20 01:59:51.201969 sshd-session[9048]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:51.242465 systemd[1]: Started sshd@27-10.0.0.44:22-10.0.0.1:45068.service - OpenSSH per-connection server daemon (10.0.0.1:45068). 
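Annotation: each accepted connection runs under its own templated unit, sshd@N-LADDR:LPORT-RADDR:RPORT.service, which is why the journal shows a SERVICE_START/SERVICE_STOP audit pair per SSH session. A hedged parse of the instance string; the field layout is inferred from the unit names above, not taken from systemd documentation:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Unit name copied from the journal above.
	unit := "sshd@26-10.0.0.44:22-10.0.0.1:45044.service"

	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	// Inferred layout: connection counter, local addr:port, remote addr:port.
	parts := strings.SplitN(inst, "-", 3)
	fmt.Printf("connection #%s local=%s remote=%s\n",
		parts[0], parts[1], parts[2])
}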
Jan 20 01:59:51.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.44:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:51.262883 kernel: audit: type=1130 audit(1768874391.240:956): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.44:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:51.309000 audit[9048]: USER_END pid=9048 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:51.342472 kernel: audit: type=1106 audit(1768874391.309:957): pid=9048 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:51.316000 audit[9048]: CRED_DISP pid=9048 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:51.369086 systemd[1]: sshd@26-10.0.0.44:22-10.0.0.1:45044.service: Deactivated successfully. Jan 20 01:59:51.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.44:22-10.0.0.1:45044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:51.412424 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:59:51.447950 systemd-logind[1623]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:59:51.459483 systemd-logind[1623]: Removed session 27. Jan 20 01:59:51.896000 audit[9067]: USER_ACCT pid=9067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:51.902089 sshd[9067]: Accepted publickey for core from 10.0.0.1 port 45068 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:51.924148 sshd-session[9067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:51.920000 audit[9067]: CRED_ACQ pid=9067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:51.920000 audit[9067]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca0c8eb10 a2=3 a3=0 items=0 ppid=1 pid=9067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:51.920000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:51.986787 systemd-logind[1623]: New session 28 of user core. 
Jan 20 01:59:52.050512 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 01:59:52.114000 audit[9067]: USER_START pid=9067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:52.154000 audit[9073]: CRED_ACQ pid=9073 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.214946 sshd[9073]: Connection closed by 10.0.0.1 port 45068 Jan 20 01:59:54.215447 sshd-session[9067]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:54.240000 audit[9067]: USER_END pid=9067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.312898 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 20 01:59:54.313074 kernel: audit: type=1106 audit(1768874394.240:965): pid=9067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.288000 audit[9067]: CRED_DISP pid=9067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.340245 kernel: audit: type=1104 audit(1768874394.288:966): pid=9067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.365447 systemd[1]: sshd@27-10.0.0.44:22-10.0.0.1:45068.service: Deactivated successfully. Jan 20 01:59:54.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.44:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:54.400859 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 01:59:54.407064 systemd-logind[1623]: Session 28 logged out. Waiting for processes to exit. Jan 20 01:59:54.559112 kernel: audit: type=1131 audit(1768874394.396:967): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.44:22-10.0.0.1:45068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:54.559188 kernel: audit: type=1130 audit(1768874394.416:968): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.44:22-10.0.0.1:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:59:54.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.44:22-10.0.0.1:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:54.412827 systemd-logind[1623]: Removed session 28. Jan 20 01:59:54.420220 systemd[1]: Started sshd@28-10.0.0.44:22-10.0.0.1:45076.service - OpenSSH per-connection server daemon (10.0.0.1:45076). Jan 20 01:59:54.899978 sshd[9087]: Accepted publickey for core from 10.0.0.1 port 45076 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 01:59:54.898000 audit[9087]: USER_ACCT pid=9087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.910968 sshd-session[9087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:54.963903 kernel: audit: type=1101 audit(1768874394.898:969): pid=9087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:54.906000 audit[9087]: CRED_ACQ pid=9087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:55.002934 systemd-logind[1623]: New session 29 of user core. Jan 20 01:59:55.026050 kernel: audit: type=1103 audit(1768874394.906:970): pid=9087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:55.026245 kernel: audit: type=1006 audit(1768874394.907:971): pid=9087 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 01:59:55.026298 kernel: audit: type=1300 audit(1768874394.907:971): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc517b8dd0 a2=3 a3=0 items=0 ppid=1 pid=9087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:54.907000 audit[9087]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc517b8dd0 a2=3 a3=0 items=0 ppid=1 pid=9087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:55.063145 kernel: audit: type=1327 audit(1768874394.907:971): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:54.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:59:55.064349 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 01:59:55.120000 audit[9087]: USER_START pid=9087 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:55.192180 kernel: audit: type=1105 audit(1768874395.120:972): pid=9087 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:55.140000 audit[9091]: CRED_ACQ pid=9091 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:55.818207 containerd[1643]: time="2026-01-20T01:59:55.817584597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:59:56.216658 containerd[1643]: time="2026-01-20T01:59:56.166781050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:56.227565 containerd[1643]: time="2026-01-20T01:59:56.227380761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:59:56.230609 containerd[1643]: time="2026-01-20T01:59:56.227599552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:56.324395 kubelet[3123]: E0120 01:59:56.324211 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:56.324395 kubelet[3123]: E0120 01:59:56.324345 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:56.325842 kubelet[3123]: E0120 01:59:56.324503 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:56.388402 containerd[1643]: time="2026-01-20T01:59:56.387579172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:59:56.661305 sshd[9091]: Connection closed by 10.0.0.1 port 45076 Jan 20 01:59:56.669323 sshd-session[9087]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:56.709030 systemd-logind[1623]: Session 29 logged out. Waiting for processes to exit. Jan 20 01:59:56.711234 systemd[1]: sshd@28-10.0.0.44:22-10.0.0.1:45076.service: Deactivated successfully. Jan 20 01:59:56.686000 audit[9087]: USER_END pid=9087 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:56.686000 audit[9087]: CRED_DISP pid=9087 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:59:56.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.44:22-10.0.0.1:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:59:56.720995 systemd[1]: session-29.scope: Deactivated successfully. 
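The whisker pull above fails because containerd's fetch got a plain 404 from ghcr.io, so the tag simply does not exist under that name. The same check can be reproduced against the registry's OCI distribution API; the anonymous token endpoint used here is an assumption about ghcr.io's auth flow for public repositories, not something this log confirms:

```python
# Ask the registry whether a manifest exists for a tag, mirroring the
# "404 Not Found" containerd reports above. Token endpoint assumed.
import json, urllib.error, urllib.request

def manifest_exists(registry: str, repo: str, tag: str) -> bool:
    with urllib.request.urlopen(
        f"https://{registry}/token?scope=repository:{repo}:pull"
    ) as r:
        token = json.load(r)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False  # the NotFound the kubelet surfaces as ErrImagePull
        raise

print(manifest_exists("ghcr.io", "flatcar/calico/whisker", "v3.30.4"))
```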
Jan 20 01:59:56.735404 containerd[1643]: time="2026-01-20T01:59:56.732540531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:56.805217 containerd[1643]: time="2026-01-20T01:59:56.805040959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:59:56.805217 containerd[1643]: time="2026-01-20T01:59:56.805185461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:56.810291 systemd-logind[1623]: Removed session 29. Jan 20 01:59:56.834646 kubelet[3123]: E0120 01:59:56.833600 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:56.869249 kubelet[3123]: E0120 01:59:56.869177 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:56.869826 kubelet[3123]: E0120 01:59:56.869657 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:56.870274 kubelet[3123]: E0120 01:59:56.843106 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 01:59:56.871881 kubelet[3123]: E0120 01:59:56.871799 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 01:59:58.770002 kubelet[3123]: E0120 01:59:58.769181 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:00.772945 kubelet[3123]: E0120 02:00:00.772846 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:00:00.786337 kubelet[3123]: E0120 02:00:00.786063 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:00:01.746473 systemd[1]: Started sshd@29-10.0.0.44:22-10.0.0.1:51220.service - OpenSSH 
per-connection server daemon (10.0.0.1:51220). Jan 20 02:00:01.787333 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 20 02:00:01.787508 kernel: audit: type=1130 audit(1768874401.745:977): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.44:22-10.0.0.1:51220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:01.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.44:22-10.0.0.1:51220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:02.164000 audit[9104]: USER_ACCT pid=9104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.170905 sshd-session[9104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:02.190598 sshd[9104]: Accepted publickey for core from 10.0.0.1 port 51220 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:02.200853 kernel: audit: type=1101 audit(1768874402.164:978): pid=9104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.200989 kernel: audit: type=1103 audit(1768874402.167:979): pid=9104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.167000 audit[9104]: CRED_ACQ pid=9104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.238880 kernel: audit: type=1006 audit(1768874402.167:980): pid=9104 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 02:00:02.167000 audit[9104]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3126e750 a2=3 a3=0 items=0 ppid=1 pid=9104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:02.256119 systemd-logind[1623]: New session 30 of user core. Jan 20 02:00:02.281817 kernel: audit: type=1300 audit(1768874402.167:980): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3126e750 a2=3 a3=0 items=0 ppid=1 pid=9104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:02.281970 kernel: audit: type=1327 audit(1768874402.167:980): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:02.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:02.297415 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 20 02:00:02.338000 audit[9104]: USER_START pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.431451 kernel: audit: type=1105 audit(1768874402.338:981): pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.431595 kernel: audit: type=1103 audit(1768874402.371:982): pid=9107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.371000 audit[9107]: CRED_ACQ pid=9107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:02.780271 kubelet[3123]: E0120 02:00:02.777088 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:03.011937 sshd[9107]: Connection closed by 10.0.0.1 port 51220 Jan 20 02:00:03.015066 sshd-session[9104]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:03.034000 audit[9104]: USER_END pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:03.059667 systemd[1]: sshd@29-10.0.0.44:22-10.0.0.1:51220.service: Deactivated successfully. Jan 20 02:00:03.092151 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:00:03.108046 systemd-logind[1623]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:00:03.142645 kernel: audit: type=1106 audit(1768874403.034:983): pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:03.142894 kernel: audit: type=1104 audit(1768874403.036:984): pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:03.036000 audit[9104]: CRED_DISP pid=9104 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:03.154855 systemd-logind[1623]: Removed session 30. 
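The dns.go warning above fires because the node's resolv.conf lists more nameservers than a pod can use; the applied line keeps only the first three. Three matches the glibc resolver limit (MAXNS), which is the usual reason for the clamp; a minimal sketch of that behavior, under that assumption:

```python
# Clamp a resolv.conf to three nameservers, the way the kubelet's
# applied line above keeps 1.1.1.1, 1.0.0.1, 8.8.8.8 and omits the rest.
MAX_NAMESERVERS = 3  # glibc MAXNS; assumed to be the operative limit

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) > MAX_NAMESERVERS:
        print(f"warning: omitting {len(servers) - MAX_NAMESERVERS} nameserver(s)")
    return servers[:MAX_NAMESERVERS]

conf = "\n".join(f"nameserver {ip}" for ip in ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```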
Jan 20 02:00:03.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.44:22-10.0.0.1:51220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:03.847607 kubelet[3123]: E0120 02:00:03.830296 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:00:03.854907 kubelet[3123]: E0120 02:00:03.854193 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:00:08.108819 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:08.108960 kernel: audit: type=1130 audit(1768874408.095:986): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.44:22-10.0.0.1:48032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:08.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.44:22-10.0.0.1:48032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:08.096667 systemd[1]: Started sshd@30-10.0.0.44:22-10.0.0.1:48032.service - OpenSSH per-connection server daemon (10.0.0.1:48032). Jan 20 02:00:08.349000 audit[9120]: USER_ACCT pid=9120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.358204 sshd[9120]: Accepted publickey for core from 10.0.0.1 port 48032 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:08.377344 sshd-session[9120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:08.442966 kernel: audit: type=1101 audit(1768874408.349:987): pid=9120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.450069 systemd-logind[1623]: New session 31 of user core. 
Jan 20 02:00:08.359000 audit[9120]: CRED_ACQ pid=9120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.556312 kernel: audit: type=1103 audit(1768874408.359:988): pid=9120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.556489 kernel: audit: type=1006 audit(1768874408.359:989): pid=9120 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 02:00:08.359000 audit[9120]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe922f09a0 a2=3 a3=0 items=0 ppid=1 pid=9120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:08.681931 kernel: audit: type=1300 audit(1768874408.359:989): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe922f09a0 a2=3 a3=0 items=0 ppid=1 pid=9120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:08.683455 kernel: audit: type=1327 audit(1768874408.359:989): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:08.359000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:08.772178 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 02:00:08.789378 kubelet[3123]: E0120 02:00:08.788878 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:00:08.793000 audit[9120]: USER_START pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.848641 kernel: audit: type=1105 audit(1768874408.793:990): pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.849030 kernel: audit: type=1103 audit(1768874408.801:991): pid=9123 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:08.801000 audit[9123]: CRED_ACQ pid=9123 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:09.844455 sshd[9123]: Connection closed by 10.0.0.1 port 48032 Jan 20 02:00:09.861945 sshd-session[9120]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:09.931000 audit[9120]: USER_END pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:09.977663 systemd[1]: sshd@30-10.0.0.44:22-10.0.0.1:48032.service: Deactivated successfully. Jan 20 02:00:10.022471 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:00:10.053391 systemd-logind[1623]: Session 31 logged out. Waiting for processes to exit. 
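From here on the errors switch from ErrImagePull to ImagePullBackOff: the kubelet stops hammering the registry and retries each image on an exponentially growing delay. A sketch of that schedule with the stock kubelet defaults (10s initial, doubling, capped at 5m; the defaults are assumed here, the log does not state them):

```python
# Exponential image-pull backoff with assumed kubelet defaults.
import itertools

def backoff_delays(initial=10.0, factor=2.0, cap=300.0):
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(itertools.islice(backoff_delays(), 7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```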
Jan 20 02:00:10.109600 kernel: audit: type=1106 audit(1768874409.931:992): pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:10.109894 kernel: audit: type=1104 audit(1768874409.931:993): pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:09.931000 audit[9120]: CRED_DISP pid=9120 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:10.073580 systemd-logind[1623]: Removed session 31. Jan 20 02:00:09.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.44:22-10.0.0.1:48032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:10.800642 kubelet[3123]: E0120 02:00:10.800150 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:00:12.828577 kubelet[3123]: E0120 02:00:12.808545 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:00:12.861312 kubelet[3123]: E0120 02:00:12.861240 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:00:14.947026 systemd[1]: Started sshd@31-10.0.0.44:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon 
(10.0.0.1:38546). Jan 20 02:00:14.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.44:22-10.0.0.1:38546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:15.034860 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:15.035126 kernel: audit: type=1130 audit(1768874414.946:995): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.44:22-10.0.0.1:38546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:15.617000 audit[9139]: USER_ACCT pid=9139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:15.630352 sshd-session[9139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:15.632171 sshd[9139]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:15.621000 audit[9139]: CRED_ACQ pid=9139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:15.682495 systemd-logind[1623]: New session 32 of user core. Jan 20 02:00:15.748386 kernel: audit: type=1101 audit(1768874415.617:996): pid=9139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:15.750952 kernel: audit: type=1103 audit(1768874415.621:997): pid=9139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:15.751088 kernel: audit: type=1006 audit(1768874415.621:998): pid=9139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 02:00:15.798584 kernel: audit: type=1300 audit(1768874415.621:998): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0f3e19d0 a2=3 a3=0 items=0 ppid=1 pid=9139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:15.621000 audit[9139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0f3e19d0 a2=3 a3=0 items=0 ppid=1 pid=9139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:15.807820 kubelet[3123]: E0120 02:00:15.800482 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:00:15.621000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:15.899571 kernel: audit: type=1327 audit(1768874415.621:998): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:15.932148 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 02:00:15.967000 audit[9139]: USER_START pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:16.017000 audit[9142]: CRED_ACQ pid=9142 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:16.096028 kernel: audit: type=1105 audit(1768874415.967:999): pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:16.096224 kernel: audit: type=1103 audit(1768874416.017:1000): pid=9142 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:17.001485 sshd[9142]: Connection closed by 10.0.0.1 port 38546 Jan 20 02:00:17.000520 sshd-session[9139]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:17.005000 audit[9139]: USER_END pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:17.029580 systemd[1]: sshd@31-10.0.0.44:22-10.0.0.1:38546.service: Deactivated successfully. Jan 20 02:00:17.056212 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:00:17.072995 systemd-logind[1623]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:00:17.097222 systemd-logind[1623]: Removed session 32. 
Jan 20 02:00:17.113593 kernel: audit: type=1106 audit(1768874417.005:1001): pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:17.113855 kernel: audit: type=1104 audit(1768874417.005:1002): pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:17.005000 audit[9139]: CRED_DISP pid=9139 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:17.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.44:22-10.0.0.1:38546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:18.808629 kubelet[3123]: E0120 02:00:18.800005 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:00:22.129057 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:22.129249 kernel: audit: type=1130 audit(1768874422.066:1004): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.44:22-10.0.0.1:38568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.44:22-10.0.0.1:38568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:22.068962 systemd[1]: Started sshd@32-10.0.0.44:22-10.0.0.1:38568.service - OpenSSH per-connection server daemon (10.0.0.1:38568). 
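Each incoming connection gets its own instance of the sshd@.service template, and the instance name encodes the connection counter plus both endpoints, which is why every session in this log starts and stops a unit like sshd@32-10.0.0.44:22-10.0.0.1:38568.service. A sketch that pulls those fields back out (IPv4 form only, as seen here):

```python
# Split a per-connection sshd unit name into its endpoints.
import re

UNIT_RE = re.compile(
    r"sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
    r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

m = UNIT_RE.fullmatch("sshd@32-10.0.0.44:22-10.0.0.1:38568.service")
assert m is not None
print(m.groupdict())
# {'n': '32', 'laddr': '10.0.0.44', 'lport': '22',
#  'raddr': '10.0.0.1', 'rport': '38568'}
```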
Jan 20 02:00:23.052224 kubelet[3123]: E0120 02:00:23.049492 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:00:24.156426 kubelet[3123]: E0120 02:00:24.156323 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:00:24.232000 audit[9180]: USER_ACCT pid=9180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:24.242542 sshd-session[9180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:24.248061 sshd[9180]: Accepted publickey for core from 10.0.0.1 port 38568 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:24.336114 kernel: audit: type=1101 audit(1768874424.232:1005): pid=9180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:24.519975 kernel: audit: type=1103 audit(1768874424.232:1006): pid=9180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:24.539278 kernel: audit: type=1006 audit(1768874424.232:1007): pid=9180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 20 02:00:24.539362 kernel: audit: type=1300 audit(1768874424.232:1007): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8fac7270 a2=3 a3=0 items=0 ppid=1 pid=9180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:24.552088 kernel: audit: type=1327 audit(1768874424.232:1007): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:24.232000 audit[9180]: CRED_ACQ pid=9180 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:24.232000 audit[9180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8fac7270 a2=3 a3=0 items=0 ppid=1 pid=9180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:24.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:24.585427 systemd-logind[1623]: New session 33 of user core. Jan 20 02:00:24.630370 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:00:26.050000 audit[9180]: USER_START pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:26.178141 kernel: audit: type=1105 audit(1768874426.050:1008): pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:26.248000 audit[9185]: CRED_ACQ pid=9185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:26.365265 kernel: audit: type=1103 audit(1768874426.248:1009): pid=9185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:26.848648 kubelet[3123]: E0120 02:00:26.848389 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:26.907498 kubelet[3123]: E0120 02:00:26.864408 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:00:27.016325 kubelet[3123]: E0120 02:00:27.008927 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:00:27.445790 sshd[9185]: Connection closed by 10.0.0.1 port 38568 Jan 20 02:00:27.450639 sshd-session[9180]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:27.467000 audit[9180]: USER_END pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.493871 systemd[1]: sshd@32-10.0.0.44:22-10.0.0.1:38568.service: Deactivated successfully. Jan 20 02:00:27.519426 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:00:27.528334 systemd-logind[1623]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:00:27.534120 systemd-logind[1623]: Removed session 33. Jan 20 02:00:27.576579 kernel: audit: type=1106 audit(1768874427.467:1010): pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.467000 audit[9180]: CRED_DISP pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.627203 kernel: audit: type=1104 audit(1768874427.467:1011): pid=9180 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.630912 kernel: audit: type=1131 audit(1768874427.509:1012): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.44:22-10.0.0.1:38568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:27.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.44:22-10.0.0.1:38568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:30.837269 kubelet[3123]: E0120 02:00:30.830138 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:00:31.778816 kubelet[3123]: E0120 02:00:31.776239 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:00:32.551344 systemd[1]: Started sshd@33-10.0.0.44:22-10.0.0.1:47714.service - OpenSSH per-connection server daemon (10.0.0.1:47714). Jan 20 02:00:32.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.44:22-10.0.0.1:47714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:32.608145 kernel: audit: type=1130 audit(1768874432.545:1013): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.44:22-10.0.0.1:47714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:33.415000 audit[9199]: USER_ACCT pid=9199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.463194 sshd-session[9199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:33.522839 sshd[9199]: Accepted publickey for core from 10.0.0.1 port 47714 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:33.539816 kernel: audit: type=1101 audit(1768874433.415:1014): pid=9199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.540006 kernel: audit: type=1103 audit(1768874433.438:1015): pid=9199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.438000 audit[9199]: CRED_ACQ pid=9199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.647935 kernel: audit: type=1006 audit(1768874433.438:1016): pid=9199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 20 02:00:33.648102 kernel: audit: type=1300 audit(1768874433.438:1016): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe42a93390 a2=3 a3=0 items=0 ppid=1 pid=9199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:33.438000 audit[9199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe42a93390 a2=3 a3=0 items=0 ppid=1 pid=9199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:33.676169 systemd-logind[1623]: New session 34 of user core. Jan 20 02:00:33.726641 kernel: audit: type=1327 audit(1768874433.438:1016): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:33.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:33.758165 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 02:00:33.851000 audit[9199]: USER_START pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.930841 kernel: audit: type=1105 audit(1768874433.851:1017): pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:33.867000 audit[9206]: CRED_ACQ pid=9206 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.000835 kernel: audit: type=1103 audit(1768874433.867:1018): pid=9206 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.767814 kubelet[3123]: E0120 02:00:34.767632 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:34.938665 sshd[9206]: Connection closed by 10.0.0.1 port 47714 Jan 20 02:00:34.927853 sshd-session[9199]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:34.948000 audit[9199]: USER_END pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.032076 kernel: audit: type=1106 audit(1768874434.948:1019): pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.949000 audit[9199]: CRED_DISP pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.044061 systemd[1]: sshd@33-10.0.0.44:22-10.0.0.1:47714.service: Deactivated successfully. Jan 20 02:00:35.095022 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:00:35.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.44:22-10.0.0.1:47714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:35.102862 kernel: audit: type=1104 audit(1768874434.949:1020): pid=9199 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.110912 systemd-logind[1623]: Session 34 logged out. Waiting for processes to exit. 
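The kernel echoes every audit record with a numeric type, and the pairs visible in this section line up one-to-one with the named PAM and service records (for example the type=1106 echo above repeats the USER_END record for session 34). A small lookup table built from those pairings; 1006=LOGIN comes from the audit headers rather than a name printed in this log:

```python
# Numeric audit record types as they pair up in this log.
AUDIT_TYPES = {
    1006: "LOGIN",          # from audit headers; the log shows only the number
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def name_for(audit_type: int) -> str:
    return AUDIT_TYPES.get(audit_type, f"UNKNOWN({audit_type})")

print(name_for(1106))  # USER_END, the session_close record
```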
Jan 20 02:00:35.152552 systemd-logind[1623]: Removed session 34. Jan 20 02:00:37.782001 kubelet[3123]: E0120 02:00:37.781881 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:00:37.792223 kubelet[3123]: E0120 02:00:37.790396 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:00:40.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.44:22-10.0.0.1:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:40.068534 systemd[1]: Started sshd@34-10.0.0.44:22-10.0.0.1:41916.service - OpenSSH per-connection server daemon (10.0.0.1:41916). Jan 20 02:00:40.092272 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:40.092430 kernel: audit: type=1130 audit(1768874440.067:1022): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.44:22-10.0.0.1:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:40.504000 audit[9221]: USER_ACCT pid=9221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.548279 sshd[9221]: Accepted publickey for core from 10.0.0.1 port 41916 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:40.559549 sshd-session[9221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:40.574920 kernel: audit: type=1101 audit(1768874440.504:1023): pid=9221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.556000 audit[9221]: CRED_ACQ pid=9221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.619169 kernel: audit: type=1103 audit(1768874440.556:1024): pid=9221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.626783 systemd-logind[1623]: New session 35 of user core. Jan 20 02:00:40.556000 audit[9221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7ee54e0 a2=3 a3=0 items=0 ppid=1 pid=9221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:40.691061 kernel: audit: type=1006 audit(1768874440.556:1025): pid=9221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 20 02:00:40.691252 kernel: audit: type=1300 audit(1768874440.556:1025): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7ee54e0 a2=3 a3=0 items=0 ppid=1 pid=9221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:40.691338 kernel: audit: type=1327 audit(1768874440.556:1025): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:40.556000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:40.712970 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 02:00:40.730000 audit[9221]: USER_START pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.779925 kernel: audit: type=1105 audit(1768874440.730:1026): pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.780586 kernel: audit: type=1103 audit(1768874440.742:1027): pid=9226 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.742000 audit[9226]: CRED_ACQ pid=9226 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.379514 sshd[9226]: Connection closed by 10.0.0.1 port 41916 Jan 20 02:00:41.389877 sshd-session[9221]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:41.410000 audit[9221]: USER_END pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.471130 systemd[1]: sshd@34-10.0.0.44:22-10.0.0.1:41916.service: Deactivated successfully. Jan 20 02:00:41.493428 kernel: audit: type=1106 audit(1768874441.410:1028): pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.410000 audit[9221]: CRED_DISP pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.497120 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:00:41.510302 systemd-logind[1623]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:00:41.517375 systemd-logind[1623]: Removed session 35. Jan 20 02:00:41.518815 kernel: audit: type=1104 audit(1768874441.410:1029): pid=9221 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.44:22-10.0.0.1:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:41.830004 kubelet[3123]: E0120 02:00:41.823557 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:00:41.830004 kubelet[3123]: E0120 02:00:41.828200 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:00:42.778038 kubelet[3123]: E0120 02:00:42.772917 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:00:42.791637 kubelet[3123]: E0120 02:00:42.791594 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:43.796556 containerd[1643]: time="2026-01-20T02:00:43.793980531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:00:44.047484 containerd[1643]: time="2026-01-20T02:00:44.044600502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:44.062488 containerd[1643]: time="2026-01-20T02:00:44.059090305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:00:44.062488 containerd[1643]: time="2026-01-20T02:00:44.059338385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:44.070797 kubelet[3123]: E0120 02:00:44.059895 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:44.070797 kubelet[3123]: E0120 02:00:44.060199 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:44.075624 kubelet[3123]: E0120 02:00:44.071142 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:44.075624 kubelet[3123]: E0120 02:00:44.072395 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:00:46.450537 systemd[1]: Started sshd@35-10.0.0.44:22-10.0.0.1:59910.service - OpenSSH per-connection server daemon (10.0.0.1:59910). 
Jan 20 02:00:46.487849 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:46.488084 kernel: audit: type=1130 audit(1768874446.450:1031): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.44:22-10.0.0.1:59910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:46.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.44:22-10.0.0.1:59910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:46.763000 audit[9239]: USER_ACCT pid=9239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.773023 sshd[9239]: Accepted publickey for core from 10.0.0.1 port 59910 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:46.775399 sshd-session[9239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:46.808956 kernel: audit: type=1101 audit(1768874446.763:1032): pid=9239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.809104 kernel: audit: type=1103 audit(1768874446.770:1033): pid=9239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.770000 audit[9239]: CRED_ACQ pid=9239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.812399 systemd-logind[1623]: New session 36 of user core. Jan 20 02:00:46.868812 kernel: audit: type=1006 audit(1768874446.772:1034): pid=9239 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 02:00:46.869111 kernel: audit: type=1300 audit(1768874446.772:1034): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1f63cbc0 a2=3 a3=0 items=0 ppid=1 pid=9239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:46.772000 audit[9239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1f63cbc0 a2=3 a3=0 items=0 ppid=1 pid=9239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:46.898933 kernel: audit: type=1327 audit(1768874446.772:1034): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:46.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:46.915940 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 02:00:46.935000 audit[9239]: USER_START pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.979799 kernel: audit: type=1105 audit(1768874446.935:1035): pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.048211 kernel: audit: type=1103 audit(1768874446.997:1036): pid=9242 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:46.997000 audit[9242]: CRED_ACQ pid=9242 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.455875 sshd[9242]: Connection closed by 10.0.0.1 port 59910 Jan 20 02:00:47.454809 sshd-session[9239]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:47.460000 audit[9239]: USER_END pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.470405 systemd-logind[1623]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:00:47.470996 systemd[1]: sshd@35-10.0.0.44:22-10.0.0.1:59910.service: Deactivated successfully. Jan 20 02:00:47.482110 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:00:47.492455 systemd-logind[1623]: Removed session 36. Jan 20 02:00:47.504901 kernel: audit: type=1106 audit(1768874447.460:1037): pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.505030 kernel: audit: type=1104 audit(1768874447.460:1038): pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.460000 audit[9239]: CRED_DISP pid=9239 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.44:22-10.0.0.1:59910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:48.780929 kubelet[3123]: E0120 02:00:48.778388 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:00:49.793856 kubelet[3123]: E0120 02:00:49.792185 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:52.525497 systemd[1]: Started sshd@36-10.0.0.44:22-10.0.0.1:59938.service - OpenSSH per-connection server daemon (10.0.0.1:59938). Jan 20 02:00:52.561994 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:52.562310 kernel: audit: type=1130 audit(1768874452.523:1040): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.44:22-10.0.0.1:59938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:52.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.44:22-10.0.0.1:59938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:52.798202 kubelet[3123]: E0120 02:00:52.795402 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:00:52.798202 kubelet[3123]: E0120 02:00:52.796134 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:00:52.810105 kubelet[3123]: E0120 02:00:52.808556 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:00:53.335000 audit[9286]: USER_ACCT pid=9286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.339481 sshd[9286]: Accepted publickey for core from 10.0.0.1 port 59938 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:00:53.358331 sshd-session[9286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:53.403048 kernel: audit: type=1101 audit(1768874453.335:1041): pid=9286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.403661 kernel: audit: type=1103 audit(1768874453.343:1042): pid=9286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.343000 audit[9286]: CRED_ACQ pid=9286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.418141 systemd-logind[1623]: New session 37 of user core. Jan 20 02:00:53.446387 kernel: audit: type=1006 audit(1768874453.345:1043): pid=9286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 20 02:00:53.345000 audit[9286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbcf8b270 a2=3 a3=0 items=0 ppid=1 pid=9286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:53.345000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:53.535248 kernel: audit: type=1300 audit(1768874453.345:1043): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbcf8b270 a2=3 a3=0 items=0 ppid=1 pid=9286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:53.535354 kernel: audit: type=1327 audit(1768874453.345:1043): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:53.541402 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 02:00:53.584000 audit[9286]: USER_START pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.647472 kernel: audit: type=1105 audit(1768874453.584:1044): pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.647656 kernel: audit: type=1103 audit(1768874453.606:1045): pid=9289 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:53.606000 audit[9289]: CRED_ACQ pid=9289 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:54.401189 sshd[9289]: Connection closed by 10.0.0.1 port 59938 Jan 20 02:00:54.408580 sshd-session[9286]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:54.410000 audit[9286]: USER_END pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:54.440000 audit[9286]: CRED_DISP pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:54.528497 systemd[1]: 
sshd@36-10.0.0.44:22-10.0.0.1:59938.service: Deactivated successfully. Jan 20 02:00:54.543413 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:00:54.569470 systemd-logind[1623]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:00:54.602760 kernel: audit: type=1106 audit(1768874454.410:1046): pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:54.602884 kernel: audit: type=1104 audit(1768874454.440:1047): pid=9286 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:54.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.44:22-10.0.0.1:59938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:54.606555 systemd-logind[1623]: Removed session 37. Jan 20 02:00:56.810920 containerd[1643]: time="2026-01-20T02:00:56.799987805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:00:57.009390 containerd[1643]: time="2026-01-20T02:00:57.007535949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:57.025424 containerd[1643]: time="2026-01-20T02:00:57.021825005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:00:57.025424 containerd[1643]: time="2026-01-20T02:00:57.021962471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:57.025810 kubelet[3123]: E0120 02:00:57.022340 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:57.025810 kubelet[3123]: E0120 02:00:57.022497 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:57.025810 kubelet[3123]: E0120 02:00:57.022673 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:57.027472 kubelet[3123]: E0120 02:00:57.026891 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:00:57.815055 kubelet[3123]: E0120 02:00:57.813921 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:00:59.516284 systemd[1]: Started sshd@37-10.0.0.44:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898). 
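[Editor's note] The alternation above between fresh ErrImagePull attempts and the intervening ImagePullBackOff messages reflects kubelet's per-image backoff: after each failed pull the retry delay roughly doubles up to a cap, so once the cap is reached a real pull recurs only every few minutes, as seen here. A minimal sketch of that policy; the 10s base and 5m cap are kubelet's commonly cited defaults, assumed rather than read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule yields the delays between successive pull attempts
// for one image: base, 2*base, 4*base, ... clamped at maxDelay.
// Base and cap values here are assumptions (10s and 5m).
func backoffSchedule(base, maxDelay time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := base
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("attempt %d: wait %s\n", i+1, d)
	}
	// attempt 1: wait 10s ... attempt 6 onward: wait 5m0s (capped)
}
```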
Jan 20 02:00:59.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.44:22-10.0.0.1:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:59.571664 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:59.571878 kernel: audit: type=1130 audit(1768874459.543:1049): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.44:22-10.0.0.1:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:00.155368 sshd[9309]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:00.154000 audit[9309]: USER_ACCT pid=9309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.174123 sshd-session[9309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:00.229843 kernel: audit: type=1101 audit(1768874460.154:1050): pid=9309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.164000 audit[9309]: CRED_ACQ pid=9309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.257390 systemd-logind[1623]: New session 38 of user core. Jan 20 02:01:00.352617 kernel: audit: type=1103 audit(1768874460.164:1051): pid=9309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.352830 kernel: audit: type=1006 audit(1768874460.164:1052): pid=9309 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 20 02:01:00.347242 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:01:00.164000 audit[9309]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff39b54d50 a2=3 a3=0 items=0 ppid=1 pid=9309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:00.459234 kernel: audit: type=1300 audit(1768874460.164:1052): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff39b54d50 a2=3 a3=0 items=0 ppid=1 pid=9309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:00.164000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:00.507822 kernel: audit: type=1327 audit(1768874460.164:1052): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:00.364000 audit[9309]: USER_START pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.576808 kernel: audit: type=1105 audit(1768874460.364:1053): pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.576971 kernel: audit: type=1103 audit(1768874460.455:1054): pid=9312 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:00.455000 audit[9312]: CRED_ACQ pid=9312 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:01.359039 sshd[9312]: Connection closed by 10.0.0.1 port 54898 Jan 20 02:01:01.362350 sshd-session[9309]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:01.408000 audit[9309]: USER_END pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:01.432432 systemd[1]: sshd@37-10.0.0.44:22-10.0.0.1:54898.service: Deactivated successfully. Jan 20 02:01:01.460190 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:01:01.479297 systemd-logind[1623]: Session 38 logged out. Waiting for processes to exit. 
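[Editor's note] Each audit record in this log (USER_ACCT, CRED_ACQ, USER_START, USER_END, CRED_DISP, SERVICE_START/STOP) is a flat list of key=value pairs, with the PAM detail nested inside a single-quoted msg='...' field. A small Go parser for that outer layer, assuming well-formed records like the ones above; it does not handle every auditd quoting corner case:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAuditRecord splits an audit line into key=value pairs, keeping
// a single-quoted msg='...' value intact as one field.
func parseAuditRecord(line string) map[string]string {
	fields := map[string]string{}
	for len(line) > 0 {
		line = strings.TrimLeft(line, " ")
		eq := strings.IndexByte(line, '=')
		if eq < 0 {
			break
		}
		key := line[:eq]
		rest := line[eq+1:]
		var val string
		if strings.HasPrefix(rest, "'") {
			// Quoted value: scan to the closing single quote.
			end := strings.IndexByte(rest[1:], '\'')
			if end < 0 {
				val, line = rest[1:], ""
			} else {
				val, line = rest[1:1+end], rest[end+2:]
			}
		} else {
			// Bare value: runs to the next space.
			sp := strings.IndexByte(rest, ' ')
			if sp < 0 {
				val, line = rest, ""
			} else {
				val, line = rest[:sp], rest[sp+1:]
			}
		}
		fields[key] = val
	}
	return fields
}

func main() {
	rec := `pid=9199 uid=0 auid=500 ses=34 msg='op=PAM:session_close acct="core" res=success'`
	f := parseAuditRecord(rec)
	fmt.Println(f["pid"], f["ses"], f["msg"])
}
```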
Jan 20 02:01:01.512344 kernel: audit: type=1106 audit(1768874461.408:1055): pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:01.408000 audit[9309]: CRED_DISP pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:01.543253 systemd-logind[1623]: Removed session 38. Jan 20 02:01:01.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.44:22-10.0.0.1:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:01.605421 kernel: audit: type=1104 audit(1768874461.408:1056): pid=9309 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.796373 kubelet[3123]: E0120 02:01:02.781305 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:02.813273 kubelet[3123]: E0120 02:01:02.813066 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:01:05.817791 containerd[1643]: time="2026-01-20T02:01:05.817503123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:01:05.951780 containerd[1643]: time="2026-01-20T02:01:05.949285258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:05.964331 containerd[1643]: time="2026-01-20T02:01:05.961440294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:01:05.964331 containerd[1643]: time="2026-01-20T02:01:05.961632135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:05.966149 kubelet[3123]: E0120 02:01:05.965454 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:01:05.966149 kubelet[3123]: E0120 02:01:05.965578 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:01:05.975995 kubelet[3123]: E0120 02:01:05.969799 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:05.975995 kubelet[3123]: E0120 
02:01:05.975303 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:01:05.980308 containerd[1643]: time="2026-01-20T02:01:05.979224465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:01:06.133326 containerd[1643]: time="2026-01-20T02:01:06.129272543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:06.149132 containerd[1643]: time="2026-01-20T02:01:06.144742170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:06.149132 containerd[1643]: time="2026-01-20T02:01:06.144924693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:01:06.154155 kubelet[3123]: E0120 02:01:06.154074 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:01:06.154155 kubelet[3123]: E0120 02:01:06.154147 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:01:06.171570 kubelet[3123]: E0120 02:01:06.162432 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:06.250213 containerd[1643]: time="2026-01-20T02:01:06.244634108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:01:06.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.44:22-10.0.0.1:57358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:06.499839 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:06.500027 kernel: audit: type=1130 audit(1768874466.469:1058): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.44:22-10.0.0.1:57358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:06.500068 containerd[1643]: time="2026-01-20T02:01:06.442318266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:06.500068 containerd[1643]: time="2026-01-20T02:01:06.461263824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:01:06.500068 containerd[1643]: time="2026-01-20T02:01:06.461412241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:06.468795 systemd[1]: Started sshd@38-10.0.0.44:22-10.0.0.1:57358.service - OpenSSH per-connection server daemon (10.0.0.1:57358). Jan 20 02:01:06.505600 kubelet[3123]: E0120 02:01:06.503032 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:01:06.505600 kubelet[3123]: E0120 02:01:06.503103 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:01:06.505600 kubelet[3123]: E0120 02:01:06.503258 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:06.505600 kubelet[3123]: E0120 02:01:06.504541 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:01:06.795374 containerd[1643]: time="2026-01-20T02:01:06.777811940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:01:06.979284 containerd[1643]: time="2026-01-20T02:01:06.978142090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:06.982000 audit[9328]: USER_ACCT pid=9328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.012382 sshd-session[9328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Jan 20 02:01:07.021875 containerd[1643]: time="2026-01-20T02:01:07.011471200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:01:07.021875 containerd[1643]: time="2026-01-20T02:01:07.011592954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:07.022054 sshd[9328]: Accepted publickey for core from 10.0.0.1 port 57358 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:07.037447 kubelet[3123]: E0120 02:01:07.036810 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:01:07.037447 kubelet[3123]: E0120 02:01:07.036890 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:01:07.037447 kubelet[3123]: E0120 02:01:07.037126 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:07.040107 kubelet[3123]: E0120 02:01:07.040044 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:01:07.058398 kernel: audit: type=1101 audit(1768874466.982:1059): pid=9328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.010000 audit[9328]: CRED_ACQ pid=9328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.096871 systemd-logind[1623]: New session 39 of user core. 
Jan 20 02:01:07.138464 kernel: audit: type=1103 audit(1768874467.010:1060): pid=9328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.138647 kernel: audit: type=1006 audit(1768874467.010:1061): pid=9328 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 20 02:01:07.010000 audit[9328]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeeff4d920 a2=3 a3=0 items=0 ppid=1 pid=9328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:07.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:07.301390 kernel: audit: type=1300 audit(1768874467.010:1061): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeeff4d920 a2=3 a3=0 items=0 ppid=1 pid=9328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:07.301561 kernel: audit: type=1327 audit(1768874467.010:1061): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:07.319034 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:01:07.341000 audit[9328]: USER_START pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.389000 audit[9331]: CRED_ACQ pid=9331 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.511663 kernel: audit: type=1105 audit(1768874467.341:1062): pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.511893 kernel: audit: type=1103 audit(1768874467.389:1063): pid=9331 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:07.802628 kubelet[3123]: E0120 02:01:07.800629 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:07.827225 kubelet[3123]: E0120 02:01:07.825244 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:01:08.237625 sshd[9331]: Connection closed by 10.0.0.1 port 57358 Jan 20 02:01:08.247574 sshd-session[9328]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:08.254000 audit[9328]: USER_END pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:08.297592 systemd[1]: sshd@38-10.0.0.44:22-10.0.0.1:57358.service: Deactivated successfully. Jan 20 02:01:08.254000 audit[9328]: CRED_DISP pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:08.297653 systemd-logind[1623]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:01:08.328530 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:01:08.356786 systemd-logind[1623]: Removed session 39. Jan 20 02:01:08.364233 kernel: audit: type=1106 audit(1768874468.254:1064): pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:08.364340 kernel: audit: type=1104 audit(1768874468.254:1065): pid=9328 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:08.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.44:22-10.0.0.1:57358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:08.782551 kubelet[3123]: E0120 02:01:08.781443 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:01:10.773493 kubelet[3123]: E0120 02:01:10.767596 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:13.339128 systemd[1]: Started sshd@39-10.0.0.44:22-10.0.0.1:57382.service - OpenSSH per-connection server daemon (10.0.0.1:57382). Jan 20 02:01:13.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.44:22-10.0.0.1:57382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:13.362627 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:13.363032 kernel: audit: type=1130 audit(1768874473.339:1067): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.44:22-10.0.0.1:57382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:13.948028 sshd[9347]: Accepted publickey for core from 10.0.0.1 port 57382 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:13.947000 audit[9347]: USER_ACCT pid=9347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:13.962584 sshd-session[9347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:14.076525 kernel: audit: type=1101 audit(1768874473.947:1068): pid=9347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:13.949000 audit[9347]: CRED_ACQ pid=9347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:14.155248 kernel: audit: type=1103 audit(1768874473.949:1069): pid=9347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:14.155370 kernel: audit: type=1006 audit(1768874473.961:1070): pid=9347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 20 02:01:13.961000 audit[9347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff245e8600 a2=3 a3=0 items=0 ppid=1 pid=9347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:14.209869 systemd-logind[1623]: New session 40 of user core. Jan 20 02:01:14.321094 kernel: audit: type=1300 audit(1768874473.961:1070): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff245e8600 a2=3 a3=0 items=0 ppid=1 pid=9347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:14.321152 kernel: audit: type=1327 audit(1768874473.961:1070): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:13.961000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:14.418848 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 20 02:01:14.452000 audit[9347]: USER_START pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:14.600074 kernel: audit: type=1105 audit(1768874474.452:1071): pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:14.603000 audit[9350]: CRED_ACQ pid=9350 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:14.706835 kernel: audit: type=1103 audit(1768874474.603:1072): pid=9350 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:15.701023 sshd[9350]: Connection closed by 10.0.0.1 port 57382 Jan 20 02:01:15.714208 sshd-session[9347]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:15.715000 audit[9347]: USER_END pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:15.733502 systemd[1]: sshd@39-10.0.0.44:22-10.0.0.1:57382.service: Deactivated successfully. Jan 20 02:01:15.746276 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:01:15.748830 systemd-logind[1623]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:01:15.715000 audit[9347]: CRED_DISP pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:15.832261 systemd-logind[1623]: Removed session 40. Jan 20 02:01:15.855928 kernel: audit: type=1106 audit(1768874475.715:1073): pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:15.856087 kernel: audit: type=1104 audit(1768874475.715:1074): pid=9347 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:15.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.44:22-10.0.0.1:57382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:17.794212 kubelet[3123]: E0120 02:01:17.779849 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:17.836804 containerd[1643]: time="2026-01-20T02:01:17.827632613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:01:17.990416 containerd[1643]: time="2026-01-20T02:01:17.990293507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:18.009888 containerd[1643]: time="2026-01-20T02:01:18.009194694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:01:18.009888 containerd[1643]: time="2026-01-20T02:01:18.009314934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:18.013482 kubelet[3123]: E0120 02:01:18.010912 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:01:18.013482 kubelet[3123]: E0120 02:01:18.010997 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:01:18.013482 kubelet[3123]: E0120 02:01:18.011152 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:18.028232 containerd[1643]: time="2026-01-20T02:01:18.028179420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:01:18.222921 containerd[1643]: time="2026-01-20T02:01:18.222219800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:18.237611 containerd[1643]: time="2026-01-20T02:01:18.232076068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:01:18.237991 containerd[1643]: time="2026-01-20T02:01:18.237617807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:18.239504 kubelet[3123]: E0120 02:01:18.238584 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:01:18.239504 kubelet[3123]: E0120 02:01:18.238669 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:01:18.239504 kubelet[3123]: E0120 02:01:18.238971 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:18.248040 kubelet[3123]: E0120 02:01:18.247868 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:01:18.776581 kubelet[3123]: E0120 02:01:18.776515 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:01:19.836370 kubelet[3123]: E0120 02:01:19.836313 3123 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:01:20.769916 systemd[1]: Started sshd@40-10.0.0.44:22-10.0.0.1:42986.service - OpenSSH per-connection server daemon (10.0.0.1:42986). Jan 20 02:01:20.804208 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:20.804392 kernel: audit: type=1130 audit(1768874480.772:1076): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.44:22-10.0.0.1:42986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:20.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.44:22-10.0.0.1:42986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:20.813913 kubelet[3123]: E0120 02:01:20.807655 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:01:20.910410 kubelet[3123]: E0120 02:01:20.906617 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:01:21.444000 audit[9408]: USER_ACCT pid=9408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.465483 sshd[9408]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:21.466764 sshd-session[9408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:21.519865 systemd-logind[1623]: New session 41 of user core. 
Jan 20 02:01:21.530830 kernel: audit: type=1101 audit(1768874481.444:1077): pid=9408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.530962 kernel: audit: type=1103 audit(1768874481.460:1078): pid=9408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.460000 audit[9408]: CRED_ACQ pid=9408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.564288 kernel: audit: type=1006 audit(1768874481.460:1079): pid=9408 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 20 02:01:21.460000 audit[9408]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4685a110 a2=3 a3=0 items=0 ppid=1 pid=9408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:21.603085 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 20 02:01:21.460000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:21.700358 kernel: audit: type=1300 audit(1768874481.460:1079): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4685a110 a2=3 a3=0 items=0 ppid=1 pid=9408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:21.700542 kernel: audit: type=1327 audit(1768874481.460:1079): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:21.623000 audit[9408]: USER_START pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.764854 kernel: audit: type=1105 audit(1768874481.623:1080): pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.636000 audit[9411]: CRED_ACQ pid=9411 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:21.773553 kernel: audit: type=1103 audit(1768874481.636:1081): pid=9411 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:22.562025 sshd[9411]: Connection closed by 10.0.0.1 port 42986 Jan 20 02:01:22.559076 
sshd-session[9408]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:22.564000 audit[9408]: USER_END pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:22.616843 systemd[1]: sshd@40-10.0.0.44:22-10.0.0.1:42986.service: Deactivated successfully. Jan 20 02:01:22.630029 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:01:22.656491 systemd-logind[1623]: Session 41 logged out. Waiting for processes to exit. Jan 20 02:01:22.564000 audit[9408]: CRED_DISP pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:22.666423 systemd-logind[1623]: Removed session 41. Jan 20 02:01:22.752374 kernel: audit: type=1106 audit(1768874482.564:1082): pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:22.752538 kernel: audit: type=1104 audit(1768874482.564:1083): pid=9408 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:22.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.44:22-10.0.0.1:42986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:23.822516 kubelet[3123]: E0120 02:01:23.820854 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:01:27.665070 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:27.665349 kernel: audit: type=1130 audit(1768874487.636:1085): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.44:22-10.0.0.1:33380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:27.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.44:22-10.0.0.1:33380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:27.638271 systemd[1]: Started sshd@41-10.0.0.44:22-10.0.0.1:33380.service - OpenSSH per-connection server daemon (10.0.0.1:33380). 
Jan 20 02:01:28.056000 audit[9425]: USER_ACCT pid=9425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.061454 sshd[9425]: Accepted publickey for core from 10.0.0.1 port 33380 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:28.080419 sshd-session[9425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:28.148373 kernel: audit: type=1101 audit(1768874488.056:1086): pid=9425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.149832 kernel: audit: type=1103 audit(1768874488.077:1087): pid=9425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.077000 audit[9425]: CRED_ACQ pid=9425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.138290 systemd-logind[1623]: New session 42 of user core. Jan 20 02:01:28.194898 kernel: audit: type=1006 audit(1768874488.077:1088): pid=9425 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 20 02:01:28.077000 audit[9425]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc116acd0 a2=3 a3=0 items=0 ppid=1 pid=9425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:28.229880 kernel: audit: type=1300 audit(1768874488.077:1088): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc116acd0 a2=3 a3=0 items=0 ppid=1 pid=9425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:28.280434 kernel: audit: type=1327 audit(1768874488.077:1088): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:28.077000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:28.283404 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 20 02:01:28.298000 audit[9425]: USER_START pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.311607 kernel: audit: type=1105 audit(1768874488.298:1089): pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.463139 kernel: audit: type=1103 audit(1768874488.324:1090): pid=9428 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:28.324000 audit[9428]: CRED_ACQ pid=9428 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:29.021525 sshd[9428]: Connection closed by 10.0.0.1 port 33380 Jan 20 02:01:29.017146 sshd-session[9425]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:29.035000 audit[9425]: USER_END pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:29.057644 systemd[1]: sshd@41-10.0.0.44:22-10.0.0.1:33380.service: Deactivated successfully. Jan 20 02:01:29.089430 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 02:01:29.106864 systemd-logind[1623]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:01:29.138505 systemd-logind[1623]: Removed session 42. Jan 20 02:01:29.156849 kernel: audit: type=1106 audit(1768874489.035:1091): pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:29.157007 kernel: audit: type=1104 audit(1768874489.035:1092): pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:29.035000 audit[9425]: CRED_DISP pid=9425 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:29.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.44:22-10.0.0.1:33380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:31.798876 kubelet[3123]: E0120 02:01:31.798106 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:01:32.830089 kubelet[3123]: E0120 02:01:32.827344 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:01:33.871262 kubelet[3123]: E0120 02:01:33.868419 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:01:33.945160 kubelet[3123]: E0120 02:01:33.942103 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:01:33.961868 kubelet[3123]: E0120 02:01:33.956655 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:01:34.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.44:22-10.0.0.1:33388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:34.243916 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:34.245290 kernel: audit: type=1130 audit(1768874494.140:1094): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.44:22-10.0.0.1:33388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:34.147279 systemd[1]: Started sshd@42-10.0.0.44:22-10.0.0.1:33388.service - OpenSSH per-connection server daemon (10.0.0.1:33388). Jan 20 02:01:34.953000 audit[9441]: USER_ACCT pid=9441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:34.971240 sshd[9441]: Accepted publickey for core from 10.0.0.1 port 33388 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:35.103922 kernel: audit: type=1101 audit(1768874494.953:1095): pid=9441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.106943 sshd-session[9441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:35.098000 audit[9441]: CRED_ACQ pid=9441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.181854 systemd-logind[1623]: New session 43 of user core. Jan 20 02:01:35.241944 kernel: audit: type=1103 audit(1768874495.098:1096): pid=9441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.242183 kernel: audit: type=1006 audit(1768874495.098:1097): pid=9441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 20 02:01:35.098000 audit[9441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0b662610 a2=3 a3=0 items=0 ppid=1 pid=9441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:35.409344 kernel: audit: type=1300 audit(1768874495.098:1097): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0b662610 a2=3 a3=0 items=0 ppid=1 pid=9441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:35.407922 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 20 02:01:35.098000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:35.429325 kernel: audit: type=1327 audit(1768874495.098:1097): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:35.469000 audit[9441]: USER_START pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.629274 kernel: audit: type=1105 audit(1768874495.469:1098): pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.629451 kernel: audit: type=1103 audit(1768874495.603:1099): pid=9444 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:35.603000 audit[9444]: CRED_ACQ pid=9444 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:36.247817 kubelet[3123]: E0120 02:01:36.244930 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:36.784002 kubelet[3123]: E0120 02:01:36.783091 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:01:37.139200 sshd[9444]: Connection closed by 10.0.0.1 port 33388 Jan 20 02:01:37.140295 sshd-session[9441]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:37.148000 audit[9441]: USER_END pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.168332 systemd[1]: sshd@42-10.0.0.44:22-10.0.0.1:33388.service: Deactivated successfully. Jan 20 02:01:37.195198 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 02:01:37.148000 audit[9441]: CRED_DISP pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.238800 systemd-logind[1623]: Session 43 logged out. Waiting for processes to exit. Jan 20 02:01:37.249847 systemd-logind[1623]: Removed session 43. 
Jan 20 02:01:37.310478 kernel: audit: type=1106 audit(1768874497.148:1100): pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.310647 kernel: audit: type=1104 audit(1768874497.148:1101): pid=9441 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.44:22-10.0.0.1:33388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:42.190994 systemd[1]: Started sshd@43-10.0.0.44:22-10.0.0.1:47974.service - OpenSSH per-connection server daemon (10.0.0.1:47974). Jan 20 02:01:42.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.44:22-10.0.0.1:47974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:42.215331 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:42.215484 kernel: audit: type=1130 audit(1768874502.189:1103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.44:22-10.0.0.1:47974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:42.624000 audit[9460]: USER_ACCT pid=9460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:42.665067 sshd-session[9460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:42.672399 sshd[9460]: Accepted publickey for core from 10.0.0.1 port 47974 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:42.662000 audit[9460]: CRED_ACQ pid=9460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:42.686920 systemd-logind[1623]: New session 44 of user core. 
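
The ImagePullBackOff errors in this stretch all end in "failed to resolve image ... not found": containerd asked ghcr.io for the v3.30.4 tag, the registry answered 404, and kubelet parked the pod to retry on a backoff. One way to confirm whether a tag exists, independent of the node, is to query the registry's OCI distribution API; the sketch below assumes the repository is public and uses ghcr.io's anonymous token endpoint (the token flow is an assumption about GHCR's setup; adjust if the image requires authentication):

    import json, urllib.request, urllib.error

    IMAGE = "flatcar/calico/apiserver"  # repository path under ghcr.io
    TAG = "v3.30.4"

    # GHCR hands out anonymous pull tokens for public repositories (assumed flow).
    tok_url = f"https://ghcr.io/token?scope=repository:{IMAGE}:pull"
    token = json.load(urllib.request.urlopen(tok_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{IMAGE}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept either a single manifest or a multi-arch index.
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("tag exists:", resp.status)
    except urllib.error.HTTPError as e:
        print("registry answered", e.code, "- a 404 matches the 'not found' above")
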
Jan 20 02:01:42.753088 kernel: audit: type=1101 audit(1768874502.624:1104): pid=9460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:42.753254 kernel: audit: type=1103 audit(1768874502.662:1105): pid=9460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:42.791880 kernel: audit: type=1006 audit(1768874502.662:1106): pid=9460 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 20 02:01:42.792040 kernel: audit: type=1300 audit(1768874502.662:1106): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3b7c6300 a2=3 a3=0 items=0 ppid=1 pid=9460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:42.662000 audit[9460]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3b7c6300 a2=3 a3=0 items=0 ppid=1 pid=9460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:42.856976 kernel: audit: type=1327 audit(1768874502.662:1106): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:42.662000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:42.887018 systemd[1]: Started session-44.scope - Session 44 of User core. 
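
Every kernel "audit: type=NNNN audit(...)" line in this log is kauditd re-emitting, to the console, a record already delivered to userspace; the "kauditd_printk_skb: N callbacks suppressed" lines mark echoes dropped by printk rate limiting. The numeric types map onto the symbolic names visible in the surrounding records (values as defined in the Linux audit UAPI header):

    AUDIT_TYPES = {
        1006: "LOGIN",          # login uid (auid) was set
        1101: "USER_ACCT",      # PAM accounting decision
        1103: "CRED_ACQ",       # credentials acquired
        1104: "CRED_DISP",      # credentials disposed
        1105: "USER_START",     # PAM session opened
        1106: "USER_END",       # PAM session closed
        1130: "SERVICE_START",  # systemd unit started
        1131: "SERVICE_STOP",   # systemd unit stopped
        1300: "SYSCALL",        # audited system call
        1327: "PROCTITLE",      # hex-encoded process title
    }
    print(AUDIT_TYPES[1300])  # -> SYSCALL
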
Jan 20 02:01:42.910000 audit[9460]: USER_START pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:42.932000 audit[9463]: CRED_ACQ pid=9463 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.029229 kernel: audit: type=1105 audit(1768874502.910:1107): pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.029415 kernel: audit: type=1103 audit(1768874502.932:1108): pid=9463 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.576236 sshd[9463]: Connection closed by 10.0.0.1 port 47974 Jan 20 02:01:43.579000 audit[9460]: USER_END pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.580097 sshd-session[9460]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:43.616522 systemd[1]: sshd@43-10.0.0.44:22-10.0.0.1:47974.service: Deactivated successfully. Jan 20 02:01:43.634288 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 02:01:43.645165 systemd-logind[1623]: Session 44 logged out. Waiting for processes to exit. Jan 20 02:01:43.714464 kernel: audit: type=1106 audit(1768874503.579:1109): pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.721993 kernel: audit: type=1104 audit(1768874503.579:1110): pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.579000 audit[9460]: CRED_DISP pid=9460 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:43.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.44:22-10.0.0.1:47974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:43.678593 systemd-logind[1623]: Removed session 44. 
Jan 20 02:01:43.780106 kubelet[3123]: E0120 02:01:43.777271 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:01:45.828534 kubelet[3123]: E0120 02:01:45.824943 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:01:45.828534 kubelet[3123]: E0120 02:01:45.825310 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:01:46.797791 kubelet[3123]: E0120 02:01:46.791112 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:01:46.817140 kubelet[3123]: E0120 02:01:46.814959 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:01:51.728521 containerd[1643]: time="2026-01-20T02:01:49.051458580Z" level=info msg="container event discarded" container=c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40 type=CONTAINER_CREATED_EVENT Jan 20 02:01:54.308019 systemd[1]: Started sshd@44-10.0.0.44:22-10.0.0.1:54704.service - OpenSSH per-connection server daemon (10.0.0.1:54704). Jan 20 02:01:55.065245 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:55.065629 kernel: audit: type=1130 audit(1768874514.306:1112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.44:22-10.0.0.1:54704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:54.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.44:22-10.0.0.1:54704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:56.155327 kubelet[3123]: E0120 02:01:56.155275 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:56.195243 kubelet[3123]: E0120 02:01:56.194952 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:01:56.453000 audit[9479]: USER_ACCT pid=9479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.460814 sshd[9479]: Accepted publickey for core from 10.0.0.1 port 54704 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:01:56.475243 sshd-session[9479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:56.530242 kernel: audit: type=1101 audit(1768874516.453:1113): pid=9479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.471000 audit[9479]: CRED_ACQ pid=9479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.589967 kernel: audit: type=1103 audit(1768874516.471:1114): pid=9479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.694400 kernel: audit: type=1006 audit(1768874516.471:1115): pid=9479 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 20 02:01:56.703491 kernel: audit: type=1300 audit(1768874516.471:1115): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1db4840 a2=3 a3=0 items=0 ppid=1 pid=9479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:56.471000 audit[9479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1db4840 a2=3 a3=0 items=0 ppid=1 pid=9479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:56.663129 systemd-logind[1623]: New session 45 of user core. Jan 20 02:01:56.736894 kernel: audit: type=1327 audit(1768874516.471:1115): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:56.471000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:56.742999 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 02:01:56.800038 containerd[1643]: time="2026-01-20T02:01:56.799924523Z" level=info msg="container event discarded" container=c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40 type=CONTAINER_STARTED_EVENT Jan 20 02:01:56.913000 audit[9479]: USER_START pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.978803 kernel: audit: type=1105 audit(1768874516.913:1116): pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.957000 audit[9501]: CRED_ACQ pid=9501 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:57.037511 kernel: audit: type=1103 audit(1768874516.957:1117): pid=9501 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:09.833266 kubelet[3123]: E0120 02:02:09.833166 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.791s" Jan 20 02:02:09.891028 kubelet[3123]: E0120 02:02:09.873313 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:02:10.141207 kubelet[3123]: E0120 02:02:10.131834 3123 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:10.211424 kubelet[3123]: E0120 02:02:10.207878 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:02:10.246917 kubelet[3123]: E0120 02:02:10.232967 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:02:10.246917 kubelet[3123]: E0120 02:02:10.233122 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:02:10.442567 kubelet[3123]: E0120 02:02:10.438230 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:02:10.442994 sshd[9501]: Connection closed by 10.0.0.1 port 54704 Jan 20 02:02:10.454999 sshd-session[9479]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:10.564000 audit[9479]: USER_END pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:10.755617 systemd[1]: sshd@44-10.0.0.44:22-10.0.0.1:54704.service: Deactivated successfully. Jan 20 02:02:10.792659 kernel: audit: type=1106 audit(1768874530.564:1118): pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:10.792976 kernel: audit: type=1104 audit(1768874530.571:1119): pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:10.571000 audit[9479]: CRED_DISP pid=9479 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:10.816024 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:02:10.846019 systemd[1]: session-45.scope: Consumed 1.409s CPU time, 15.8M memory peak. Jan 20 02:02:10.884999 kernel: audit: type=1131 audit(1768874530.755:1120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.44:22-10.0.0.1:54704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:10.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.44:22-10.0.0.1:54704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:10.909358 systemd-logind[1623]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:02:10.931227 systemd-logind[1623]: Removed session 45. 
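
Each SSH connection in this log follows the same audit arc: USER_ACCT, CRED_ACQ, LOGIN, USER_START on the way in; USER_END, CRED_DISP, and a SERVICE_STOP for the per-connection sshd@ unit on the way out, with systemd reporting per-scope resource use at teardown (session 45 consumed 1.409s of CPU and peaked at 15.8M of memory). Pairing USER_START/USER_END by sshd pid is enough to recover session durations; a small parsing sketch that assumes one journal record per line on stdin:

    import re, sys
    from datetime import datetime

    # e.g. "Jan 20 02:01:35.469000 audit[9441]: USER_START ..."
    pat = re.compile(r"(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[(\d+)\]: (USER_START|USER_END)")
    starts = {}
    for line in sys.stdin:
        m = pat.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
        pid, kind = m.group(2), m.group(3)
        if kind == "USER_START":
            starts[pid] = ts
        elif pid in starts:
            dur = (ts - starts.pop(pid)).total_seconds()
            print(f"sshd pid {pid}: session lasted {dur:.1f}s")
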
Jan 20 02:02:10.974620 containerd[1643]: time="2026-01-20T02:02:10.973563790Z" level=error msg="ExecSync for \"c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jan 20 02:02:10.979558 kubelet[3123]: E0120 02:02:10.976625 3123 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="c9bf082ce41898c53b55b552a810d7329bff7ef745b6d94536608a0ea0f3df40" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 20 02:02:11.769261 kubelet[3123]: E0120 02:02:11.769118 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:11.822845 kubelet[3123]: E0120 02:02:11.816896 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:02:15.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.44:22-10.0.0.1:34992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:15.551575 systemd[1]: Started sshd@45-10.0.0.44:22-10.0.0.1:34992.service - OpenSSH per-connection server daemon (10.0.0.1:34992). Jan 20 02:02:15.624034 kernel: audit: type=1130 audit(1768874535.548:1121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.44:22-10.0.0.1:34992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:16.127000 audit[9517]: USER_ACCT pid=9517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.130285 sshd[9517]: Accepted publickey for core from 10.0.0.1 port 34992 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:16.137785 sshd-session[9517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:16.200588 systemd-logind[1623]: New session 46 of user core. 
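
The recurring dns.go:153 "Nameserver limits exceeded" warning is kubelet noticing that the node's resolv.conf lists more nameservers than can actually take effect: glibc's resolver honors at most three (MAXNS), so kubelet truncates the list it propagates to pods to the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A quick local check:

    MAXNS = 3  # glibc's compile-time nameserver limit

    with open("/etc/resolv.conf") as f:
        servers = [line.split()[1] for line in f if line.startswith("nameserver")]

    if len(servers) > MAXNS:
        print(f"{len(servers)} nameservers configured; only {servers[:MAXNS]} take effect")
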
Jan 20 02:02:16.134000 audit[9517]: CRED_ACQ pid=9517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.276206 kernel: audit: type=1101 audit(1768874536.127:1122): pid=9517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.276398 kernel: audit: type=1103 audit(1768874536.134:1123): pid=9517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.304874 kernel: audit: type=1006 audit(1768874536.134:1124): pid=9517 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 20 02:02:16.413943 kernel: audit: type=1300 audit(1768874536.134:1124): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd229c0f20 a2=3 a3=0 items=0 ppid=1 pid=9517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:16.134000 audit[9517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd229c0f20 a2=3 a3=0 items=0 ppid=1 pid=9517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:16.426766 kernel: audit: type=1327 audit(1768874536.134:1124): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:16.134000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:16.428631 systemd[1]: Started session-46.scope - Session 46 of User core. 
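
The ExecSync failure at 02:02:10 is calico-node's readiness probe ("/bin/calico-node -bird-ready -felix-ready") being executed inside the container over CRI and hitting its 10s deadline, so the runtime reports DeadlineExceeded instead of a probe verdict. Coming right after kubelet's 11.791s housekeeping stall and containerd's "container event discarded" notices, this points at node-level pressure rather than a broken probe command. Mechanically it is just a command run under a deadline; a stand-in sketch (the binary path is the probe's own and will not exist outside the container):

    import subprocess

    try:
        # The runtime enforces the probe's timeout the same way: start the
        # command, wait, and cancel once the deadline passes.
        subprocess.run(["/bin/calico-node", "-bird-ready", "-felix-ready"],
                       timeout=10, check=True)
        print("ready")
    except subprocess.TimeoutExpired:
        print("deadline exceeded - the probe counts as failed")
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print("not ready:", exc)
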
Jan 20 02:02:16.530000 audit[9517]: USER_START pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.619401 kernel: audit: type=1105 audit(1768874536.530:1125): pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.637000 audit[9520]: CRED_ACQ pid=9520 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:16.716994 kernel: audit: type=1103 audit(1768874536.637:1126): pid=9520 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:18.115142 sshd[9520]: Connection closed by 10.0.0.1 port 34992 Jan 20 02:02:18.135052 sshd-session[9517]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:18.160000 audit[9517]: USER_END pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:18.265848 systemd[1]: sshd@45-10.0.0.44:22-10.0.0.1:34992.service: Deactivated successfully. Jan 20 02:02:18.301329 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:02:18.333311 kernel: audit: type=1106 audit(1768874538.160:1127): pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:18.333475 kernel: audit: type=1104 audit(1768874538.233:1128): pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:18.233000 audit[9517]: CRED_DISP pid=9517 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:18.385352 kernel: audit: type=1131 audit(1768874538.273:1129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.44:22-10.0.0.1:34992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:18.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.44:22-10.0.0.1:34992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:18.381652 systemd-logind[1623]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:02:18.435882 systemd-logind[1623]: Removed session 46. Jan 20 02:02:21.966394 kubelet[3123]: E0120 02:02:21.951585 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:02:23.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.44:22-10.0.0.1:35004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:23.256875 systemd[1]: Started sshd@46-10.0.0.44:22-10.0.0.1:35004.service - OpenSSH per-connection server daemon (10.0.0.1:35004). Jan 20 02:02:23.289189 kernel: audit: type=1130 audit(1768874543.255:1130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.44:22-10.0.0.1:35004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:23.742663 sshd[9559]: Accepted publickey for core from 10.0.0.1 port 35004 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:23.741000 audit[9559]: USER_ACCT pid=9559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.756460 sshd-session[9559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:23.771443 kernel: audit: type=1101 audit(1768874543.741:1131): pid=9559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.775281 systemd-logind[1623]: New session 47 of user core. 
Jan 20 02:02:23.754000 audit[9559]: CRED_ACQ pid=9559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.802228 kubelet[3123]: E0120 02:02:23.798145 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:02:23.818574 kernel: audit: type=1103 audit(1768874543.754:1132): pid=9559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.818821 kernel: audit: type=1006 audit(1768874543.754:1133): pid=9559 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 20 02:02:23.754000 audit[9559]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe543144a0 a2=3 a3=0 items=0 ppid=1 pid=9559 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:23.754000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:23.857521 kernel: audit: type=1300 audit(1768874543.754:1133): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe543144a0 a2=3 a3=0 items=0 ppid=1 pid=9559 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:23.857752 kernel: audit: type=1327 audit(1768874543.754:1133): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:23.862905 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 20 02:02:23.878000 audit[9559]: USER_START pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.935361 kernel: audit: type=1105 audit(1768874543.878:1134): pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:23.894000 audit[9562]: CRED_ACQ pid=9562 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:24.002026 kernel: audit: type=1103 audit(1768874543.894:1135): pid=9562 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:24.776558 kubelet[3123]: E0120 02:02:24.774430 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:24.793308 kubelet[3123]: E0120 02:02:24.791496 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:24.833552 kubelet[3123]: E0120 02:02:24.826251 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:02:24.833552 kubelet[3123]: E0120 02:02:24.826895 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:02:24.850362 kubelet[3123]: E0120 02:02:24.843093 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:02:25.206412 sshd[9562]: Connection closed by 10.0.0.1 port 35004 Jan 20 02:02:25.209104 sshd-session[9559]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:25.214000 audit[9559]: USER_END pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:25.241259 systemd[1]: sshd@46-10.0.0.44:22-10.0.0.1:35004.service: Deactivated successfully. Jan 20 02:02:25.244845 systemd-logind[1623]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:02:25.214000 audit[9559]: CRED_DISP pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:25.276989 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:02:25.299279 systemd-logind[1623]: Removed session 47. Jan 20 02:02:25.331133 kernel: audit: type=1106 audit(1768874545.214:1136): pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:25.331358 kernel: audit: type=1104 audit(1768874545.214:1137): pid=9559 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:25.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.44:22-10.0.0.1:35004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:25.809494 kubelet[3123]: E0120 02:02:25.797756 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:02:30.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.44:22-10.0.0.1:36842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:30.316058 systemd[1]: Started sshd@47-10.0.0.44:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842). Jan 20 02:02:30.352796 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:30.352979 kernel: audit: type=1130 audit(1768874550.315:1139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.44:22-10.0.0.1:36842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:30.703414 sshd[9575]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:30.701000 audit[9575]: USER_ACCT pid=9575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.715307 sshd-session[9575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:30.707000 audit[9575]: CRED_ACQ pid=9575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.826480 systemd-logind[1623]: New session 48 of user core. 
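
Note the cadence of the image-pull errors across this stretch: the same seven images (apiserver, kube-controllers, csi, node-driver-registrar, goldmane, whisker, whisker-backend) keep resurfacing, first seconds apart and then minutes apart. That is consistent with kubelet's per-image pull backoff, which (assuming the usual defaults) starts around 10s, doubles per failed attempt, and caps at 5 minutes, with the pod sitting in ImagePullBackOff between attempts:

    base, cap = 10, 300  # seconds; assumed kubelet image-pull backoff defaults
    delay, elapsed = base, 0
    for attempt in range(1, 9):
        elapsed += delay
        print(f"attempt {attempt}: next retry in {delay:>3}s (t+{elapsed}s)")
        delay = min(delay * 2, cap)
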
Jan 20 02:02:30.831605 kernel: audit: type=1101 audit(1768874550.701:1140): pid=9575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.831741 kernel: audit: type=1103 audit(1768874550.707:1141): pid=9575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.831966 kernel: audit: type=1006 audit(1768874550.707:1142): pid=9575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 20 02:02:30.707000 audit[9575]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb0166400 a2=3 a3=0 items=0 ppid=1 pid=9575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:30.900054 kernel: audit: type=1300 audit(1768874550.707:1142): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb0166400 a2=3 a3=0 items=0 ppid=1 pid=9575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:30.900787 kernel: audit: type=1327 audit(1768874550.707:1142): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:30.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:30.892272 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 20 02:02:30.920000 audit[9575]: USER_START pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.976423 kernel: audit: type=1105 audit(1768874550.920:1143): pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.976592 kernel: audit: type=1103 audit(1768874550.928:1144): pid=9578 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:30.928000 audit[9578]: CRED_ACQ pid=9578 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:31.623581 sshd[9578]: Connection closed by 10.0.0.1 port 36842 Jan 20 02:02:31.630538 sshd-session[9575]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:31.741277 kernel: audit: type=1106 audit(1768874551.663:1145): pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:31.663000 audit[9575]: USER_END pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:31.663000 audit[9575]: CRED_DISP pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:31.775136 systemd[1]: sshd@47-10.0.0.44:22-10.0.0.1:36842.service: Deactivated successfully. Jan 20 02:02:31.803584 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:02:31.822793 kernel: audit: type=1104 audit(1768874551.663:1146): pid=9575 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:31.824479 systemd-logind[1623]: Session 48 logged out. Waiting for processes to exit. Jan 20 02:02:31.829196 systemd-logind[1623]: Removed session 48. Jan 20 02:02:31.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.44:22-10.0.0.1:36842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:33.776769 kubelet[3123]: E0120 02:02:33.776594 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:35.834504 kubelet[3123]: E0120 02:02:35.834118 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:02:36.674460 systemd[1]: Started sshd@48-10.0.0.44:22-10.0.0.1:40698.service - OpenSSH per-connection server daemon (10.0.0.1:40698). Jan 20 02:02:36.708242 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:36.708400 kernel: audit: type=1130 audit(1768874556.694:1148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.44:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:36.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.44:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:36.802606 kubelet[3123]: E0120 02:02:36.802550 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:02:36.803083 kubelet[3123]: E0120 02:02:36.802998 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:02:37.219805 sshd[9591]: Accepted publickey for core from 10.0.0.1 port 40698 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:37.217000 audit[9591]: USER_ACCT pid=9591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.235399 sshd-session[9591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:37.231000 audit[9591]: CRED_ACQ pid=9591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.329814 systemd-logind[1623]: New session 49 of user core. 
Jan 20 02:02:37.371404 kernel: audit: type=1101 audit(1768874557.217:1149): pid=9591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.371606 kernel: audit: type=1103 audit(1768874557.231:1150): pid=9591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.371782 kernel: audit: type=1006 audit(1768874557.231:1151): pid=9591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 20 02:02:37.231000 audit[9591]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a24e420 a2=3 a3=0 items=0 ppid=1 pid=9591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:37.457031 kernel: audit: type=1300 audit(1768874557.231:1151): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8a24e420 a2=3 a3=0 items=0 ppid=1 pid=9591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:37.231000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:37.482901 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 02:02:37.499373 kernel: audit: type=1327 audit(1768874557.231:1151): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:37.518000 audit[9591]: USER_START pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.597335 kernel: audit: type=1105 audit(1768874557.518:1152): pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.535000 audit[9594]: CRED_ACQ pid=9594 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.657108 kernel: audit: type=1103 audit(1768874557.535:1153): pid=9594 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:37.806170 kubelet[3123]: E0120 02:02:37.798810 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:02:37.806170 kubelet[3123]: E0120 02:02:37.799940 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:02:37.820816 kubelet[3123]: E0120 02:02:37.820519 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:02:38.307311 sshd[9594]: Connection closed by 10.0.0.1 port 40698 Jan 20 02:02:38.316222 sshd-session[9591]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:38.311000 audit[9591]: USER_END pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:38.311000 audit[9591]: CRED_DISP pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:38.431335 systemd[1]: sshd@48-10.0.0.44:22-10.0.0.1:40698.service: Deactivated successfully. Jan 20 02:02:38.460218 systemd[1]: session-49.scope: Deactivated successfully. 
Jan 20 02:02:38.473784 kernel: audit: type=1106 audit(1768874558.311:1154): pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:38.473925 kernel: audit: type=1104 audit(1768874558.311:1155): pid=9591 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:38.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.44:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:38.515925 systemd-logind[1623]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:02:38.531235 systemd-logind[1623]: Removed session 49. Jan 20 02:02:40.793072 kubelet[3123]: E0120 02:02:40.767762 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:43.427896 systemd[1]: Started sshd@49-10.0.0.44:22-10.0.0.1:40710.service - OpenSSH per-connection server daemon (10.0.0.1:40710). Jan 20 02:02:43.464789 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:43.464938 kernel: audit: type=1130 audit(1768874563.440:1157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.44:22-10.0.0.1:40710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:43.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.44:22-10.0.0.1:40710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:44.044214 sshd[9611]: Accepted publickey for core from 10.0.0.1 port 40710 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:44.026000 audit[9611]: USER_ACCT pid=9611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.056532 sshd-session[9611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:44.041000 audit[9611]: CRED_ACQ pid=9611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.234140 kernel: audit: type=1101 audit(1768874564.026:1158): pid=9611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.234261 kernel: audit: type=1103 audit(1768874564.041:1159): pid=9611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.234331 kernel: audit: type=1006 audit(1768874564.054:1160): pid=9611 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 20 02:02:44.256047 systemd-logind[1623]: New session 50 of user core. Jan 20 02:02:44.054000 audit[9611]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdba528d0 a2=3 a3=0 items=0 ppid=1 pid=9611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:44.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:44.371187 kernel: audit: type=1300 audit(1768874564.054:1160): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdba528d0 a2=3 a3=0 items=0 ppid=1 pid=9611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:44.371334 kernel: audit: type=1327 audit(1768874564.054:1160): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:44.372416 systemd[1]: Started session-50.scope - Session 50 of User core. 
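
The kubelet's "Nameserver limits exceeded" warning (dns.go:153, seen at 02:02:40 above) arises because glibc's stub resolver honors at most three nameserver entries, so the kubelet truncates the host's list when assembling pod DNS config. A sketch of that truncation; the fourth server is invented for illustration, since the log only shows the three that survived:

    MAXNS = 3  # glibc's resolver reads at most three nameserver entries
    # Host resolv.conf nameservers; 8.8.4.4 is a hypothetical fourth entry.
    servers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
    if len(servers) > MAXNS:
        print("some nameservers have been omitted, the applied nameserver "
              "line is: " + " ".join(servers[:MAXNS]))
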
Jan 20 02:02:44.401000 audit[9611]: USER_START pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.408000 audit[9614]: CRED_ACQ pid=9614 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.454179 kernel: audit: type=1105 audit(1768874564.401:1161): pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:44.454471 kernel: audit: type=1103 audit(1768874564.408:1162): pid=9614 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:45.215112 sshd[9614]: Connection closed by 10.0.0.1 port 40710 Jan 20 02:02:45.216166 sshd-session[9611]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:45.232000 audit[9611]: USER_END pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:45.260286 systemd[1]: sshd@49-10.0.0.44:22-10.0.0.1:40710.service: Deactivated successfully. Jan 20 02:02:45.282283 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 02:02:45.300055 systemd-logind[1623]: Session 50 logged out. Waiting for processes to exit. Jan 20 02:02:45.308794 systemd-logind[1623]: Removed session 50. Jan 20 02:02:45.327550 kernel: audit: type=1106 audit(1768874565.232:1163): pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:45.327780 kernel: audit: type=1104 audit(1768874565.232:1164): pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:45.232000 audit[9611]: CRED_DISP pid=9611 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:45.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.44:22-10.0.0.1:40710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:48.796542 kubelet[3123]: E0120 02:02:48.789390 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:02:48.796542 kubelet[3123]: E0120 02:02:48.790030 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:02:48.803432 kubelet[3123]: E0120 02:02:48.803351 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:02:48.812080 kubelet[3123]: E0120 02:02:48.811836 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:02:49.719611 containerd[1643]: time="2026-01-20T02:02:49.714777434Z" level=info msg="container event discarded" container=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a type=CONTAINER_CREATED_EVENT Jan 20 02:02:49.719611 containerd[1643]: time="2026-01-20T02:02:49.714831689Z" level=info msg="container event discarded" container=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a type=CONTAINER_STARTED_EVENT Jan 20 02:02:50.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.44:22-10.0.0.1:34920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:50.349868 systemd[1]: Started sshd@50-10.0.0.44:22-10.0.0.1:34920.service - OpenSSH per-connection server daemon (10.0.0.1:34920). Jan 20 02:02:50.361761 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:50.361941 kernel: audit: type=1130 audit(1768874570.352:1166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.44:22-10.0.0.1:34920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:50.634000 audit[9655]: USER_ACCT pid=9655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.643867 sshd[9655]: Accepted publickey for core from 10.0.0.1 port 34920 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:50.656761 kernel: audit: type=1101 audit(1768874570.634:1167): pid=9655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.656918 kernel: audit: type=1103 audit(1768874570.649:1168): pid=9655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.649000 audit[9655]: CRED_ACQ pid=9655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.656816 sshd-session[9655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:50.707950 kernel: audit: type=1006 audit(1768874570.650:1169): pid=9655 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 20 02:02:50.704950 systemd-logind[1623]: New session 51 of user core. 
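
Every audit record is stamped audit(EPOCH.MILLIS:SERIAL); records sharing a serial describe one event (the records stamped :1169 around this point, for instance), and the epoch converts directly to the wall-clock times in the surrounding journal lines. Taking the USER_ACCT stamp from just above:

    # audit(1768874570.634:1167) = epoch seconds + event serial; records
    # with the same serial belong to the same event.
    from datetime import datetime, timezone
    ts, serial = "1768874570.634:1167".split(":")
    when = datetime.fromtimestamp(float(ts), tz=timezone.utc)
    print(serial, when.isoformat())   # 1167 2026-01-20T02:02:50.634000+00:00
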
Jan 20 02:02:50.650000 audit[9655]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c477070 a2=3 a3=0 items=0 ppid=1 pid=9655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:50.650000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:50.752072 kernel: audit: type=1300 audit(1768874570.650:1169): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c477070 a2=3 a3=0 items=0 ppid=1 pid=9655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:50.754794 kernel: audit: type=1327 audit(1768874570.650:1169): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:50.761892 containerd[1643]: time="2026-01-20T02:02:50.759339558Z" level=info msg="container event discarded" container=181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790 type=CONTAINER_CREATED_EVENT Jan 20 02:02:50.761892 containerd[1643]: time="2026-01-20T02:02:50.759407678Z" level=info msg="container event discarded" container=181b3a96c3aa5da8383818d6b70dcc2190177b935aadc869120b1f462947a790 type=CONTAINER_STARTED_EVENT Jan 20 02:02:50.760885 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 20 02:02:50.791291 kubelet[3123]: E0120 02:02:50.781933 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:02:50.805000 audit[9655]: USER_START pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.859343 kernel: audit: type=1105 audit(1768874570.805:1170): pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.859567 kernel: audit: type=1103 audit(1768874570.838:1171): pid=9658 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.838000 audit[9658]: CRED_ACQ pid=9658 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.474888 sshd[9658]: Connection closed by 10.0.0.1 port 34920 Jan 20 02:02:51.484627 sshd-session[9655]: pam_unix(sshd:session): session closed for user core Jan 
20 02:02:51.488000 audit[9655]: USER_END pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.508014 systemd[1]: sshd@50-10.0.0.44:22-10.0.0.1:34920.service: Deactivated successfully. Jan 20 02:02:51.524117 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 02:02:51.544750 systemd-logind[1623]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:02:51.558810 kernel: audit: type=1106 audit(1768874571.488:1172): pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.559754 systemd-logind[1623]: Removed session 51. Jan 20 02:02:51.488000 audit[9655]: CRED_DISP pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.44:22-10.0.0.1:34920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:51.628402 kernel: audit: type=1104 audit(1768874571.488:1173): pid=9655 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.869780 kubelet[3123]: E0120 02:02:51.869550 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:02:53.657849 containerd[1643]: time="2026-01-20T02:02:53.656165403Z" level=info msg="container event discarded" container=fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487 type=CONTAINER_CREATED_EVENT Jan 20 02:02:53.657849 containerd[1643]: time="2026-01-20T02:02:53.656282432Z" level=info msg="container event discarded" container=fb8262e56d58beecf0cdfc1592bac22004c60b9d161705a8d440db0638307487 type=CONTAINER_STARTED_EVENT Jan 20 02:02:56.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.44:22-10.0.0.1:37746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:56.603248 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:56.603393 kernel: audit: type=1130 audit(1768874576.593:1175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.44:22-10.0.0.1:37746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:56.594563 systemd[1]: Started sshd@51-10.0.0.44:22-10.0.0.1:37746.service - OpenSSH per-connection server daemon (10.0.0.1:37746). Jan 20 02:02:56.824000 audit[9691]: USER_ACCT pid=9691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.849196 sshd[9691]: Accepted publickey for core from 10.0.0.1 port 37746 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:02:56.873586 sshd-session[9691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:56.890804 kernel: audit: type=1101 audit(1768874576.824:1176): pid=9691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.891015 kernel: audit: type=1103 audit(1768874576.860:1177): pid=9691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.860000 audit[9691]: CRED_ACQ pid=9691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.926762 systemd-logind[1623]: New session 52 of user core. Jan 20 02:02:56.963774 kernel: audit: type=1006 audit(1768874576.860:1178): pid=9691 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 20 02:02:56.860000 audit[9691]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ced4010 a2=3 a3=0 items=0 ppid=1 pid=9691 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:56.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:57.097006 kernel: audit: type=1300 audit(1768874576.860:1178): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ced4010 a2=3 a3=0 items=0 ppid=1 pid=9691 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:57.097252 kernel: audit: type=1327 audit(1768874576.860:1178): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:57.093114 systemd[1]: Started session-52.scope - Session 52 of User core. 
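
All of the failing pulls in this stretch reference ghcr.io/flatcar/calico/* images at v3.30.4, and the kubelet repeats each failure verbatim, so a quick tally over a dump of this journal shows which references recur (journal.txt is a hypothetical file name for such a dump):

    import re
    from collections import Counter
    # journal.txt: hypothetical plain-text dump of this journal.
    log = open("journal.txt").read()
    refs = re.findall(r"ghcr\.io/[A-Za-z0-9._/-]+:v[0-9][A-Za-z0-9.]*", log)
    for ref, count in Counter(refs).most_common():
        print(count, ref)
    # e.g. "N ghcr.io/flatcar/calico/apiserver:v3.30.4"; counts vary by window
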
Jan 20 02:02:57.141000 audit[9691]: USER_START pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.242444 kernel: audit: type=1105 audit(1768874577.141:1179): pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.249000 audit[9694]: CRED_ACQ pid=9694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.319039 kernel: audit: type=1103 audit(1768874577.249:1180): pid=9694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.926221 containerd[1643]: time="2026-01-20T02:02:57.923506853Z" level=info msg="container event discarded" container=93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff type=CONTAINER_CREATED_EVENT Jan 20 02:02:57.926221 containerd[1643]: time="2026-01-20T02:02:57.923600741Z" level=info msg="container event discarded" container=93237f0bd8b47aa75fe768c371bd0dae4b9a89687874789059ac2ac53a6d1fff type=CONTAINER_STARTED_EVENT Jan 20 02:02:58.838363 sshd[9694]: Connection closed by 10.0.0.1 port 37746 Jan 20 02:02:58.828634 sshd-session[9691]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:58.850000 audit[9691]: USER_END pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:58.915169 kernel: audit: type=1106 audit(1768874578.850:1181): pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:58.915570 kernel: audit: type=1104 audit(1768874578.880:1182): pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:58.880000 audit[9691]: CRED_DISP pid=9691 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:58.955012 systemd[1]: sshd@51-10.0.0.44:22-10.0.0.1:37746.service: Deactivated successfully. 
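
Each SSH login/logout above leaves a type=1105 (session open) and type=1106 (session close) kernel audit echo keyed by ses=, so session lifetimes fall out of pairing them; session 49 earlier ran from 1768874557.518 to 1768874558.311, roughly 0.8s. A sketch over the same hypothetical journal.txt dump:

    import re
    # Pair session_open (type=1105) with session_close (type=1106) audit
    # echoes by ses= to compute SSH session lifetimes.
    pat = re.compile(r"type=(1105|1106) audit\((\d+\.\d+):\d+\).*?\bses=(\d+)")
    opened = {}
    for line in open("journal.txt"):       # hypothetical dump of this log
        m = pat.search(line)
        if not m:
            continue
        rectype, ts, ses = m.group(1), float(m.group(2)), m.group(3)
        if rectype == "1105":
            opened[ses] = ts
        elif ses in opened:
            print(f"session {ses}: {ts - opened.pop(ses):.1f}s")
    # e.g. "session 49: 0.8s"
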
Jan 20 02:02:58.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.44:22-10.0.0.1:37746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:59.002492 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 02:02:59.035827 systemd-logind[1623]: Session 52 logged out. Waiting for processes to exit. Jan 20 02:02:59.037964 systemd-logind[1623]: Removed session 52. Jan 20 02:02:59.789251 kubelet[3123]: E0120 02:02:59.777205 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:00.773547 kubelet[3123]: E0120 02:03:00.773145 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:03:01.781247 kubelet[3123]: E0120 02:03:01.781193 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:03:01.790926 kubelet[3123]: E0120 02:03:01.790405 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:03:02.511670 containerd[1643]: time="2026-01-20T02:03:02.502813011Z" level=info msg="container event discarded" container=ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad type=CONTAINER_CREATED_EVENT Jan 20 02:03:02.798104 kubelet[3123]: E0120 02:03:02.785055 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:03:02.859259 containerd[1643]: time="2026-01-20T02:03:02.859081836Z" level=info msg="container event discarded" container=9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2 type=CONTAINER_CREATED_EVENT Jan 20 02:03:02.859259 containerd[1643]: time="2026-01-20T02:03:02.859161708Z" level=info msg="container event discarded" container=9924a9f6e0142e7d636b5e4dbbd5e00977df39e1f29b5745b9f36d60dc1694d2 type=CONTAINER_STARTED_EVENT Jan 20 02:03:03.414343 containerd[1643]: time="2026-01-20T02:03:03.413897125Z" level=info msg="container event discarded" container=42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f type=CONTAINER_CREATED_EVENT Jan 20 02:03:03.420181 containerd[1643]: time="2026-01-20T02:03:03.419624341Z" level=info msg="container event discarded" container=42d81007ed907ed479e5212fabce9ae0011295ec42aa72306d447fb8bf1ece3f type=CONTAINER_STARTED_EVENT Jan 20 02:03:03.991435 systemd[1]: Started sshd@52-10.0.0.44:22-10.0.0.1:37754.service - OpenSSH per-connection server daemon (10.0.0.1:37754). Jan 20 02:03:03.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:04.007880 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:04.008029 kernel: audit: type=1130 audit(1768874583.992:1184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:04.688578 kernel: audit: type=1101 audit(1768874584.605:1185): pid=9709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.605000 audit[9709]: USER_ACCT pid=9709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.627146 sshd-session[9709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:04.692614 sshd[9709]: Accepted publickey for core from 10.0.0.1 port 37754 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:04.625000 audit[9709]: CRED_ACQ pid=9709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.765573 kernel: audit: type=1103 audit(1768874584.625:1186): pid=9709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.753191 systemd-logind[1623]: New session 53 of user core. Jan 20 02:03:04.797965 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 20 02:03:04.883299 kernel: audit: type=1006 audit(1768874584.625:1187): pid=9709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 20 02:03:04.883457 kernel: audit: type=1300 audit(1768874584.625:1187): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff54220e80 a2=3 a3=0 items=0 ppid=1 pid=9709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:04.625000 audit[9709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff54220e80 a2=3 a3=0 items=0 ppid=1 pid=9709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:04.883661 kubelet[3123]: E0120 02:03:04.813130 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:03:04.883661 kubelet[3123]: E0120 02:03:04.803667 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:03:04.993338 kernel: audit: type=1327 audit(1768874584.625:1187): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:04.993497 kernel: audit: type=1105 audit(1768874584.876:1188): pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.993540 kernel: audit: type=1103 audit(1768874584.922:1189): pid=9712 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.625000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:04.876000 audit[9709]: USER_START pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.922000 audit[9712]: CRED_ACQ pid=9712 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:05.516411 sshd[9712]: Connection closed by 10.0.0.1 port 37754 Jan 20 02:03:05.517076 sshd-session[9709]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:05.529000 audit[9709]: USER_END pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:05.593785 kernel: audit: type=1106 audit(1768874585.529:1190): pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:05.593996 kernel: audit: type=1104 audit(1768874585.530:1191): pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:05.530000 audit[9709]: CRED_DISP pid=9709 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:05.650129 systemd[1]: sshd@52-10.0.0.44:22-10.0.0.1:37754.service: Deactivated successfully. Jan 20 02:03:05.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:05.671897 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 02:03:05.686880 systemd-logind[1623]: Session 53 logged out. Waiting for processes to exit. Jan 20 02:03:05.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.44:22-10.0.0.1:60140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:05.699100 systemd[1]: Started sshd@53-10.0.0.44:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140). Jan 20 02:03:05.713605 systemd-logind[1623]: Removed session 53. 
Jan 20 02:03:06.143943 sshd[9726]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:06.142000 audit[9726]: USER_ACCT pid=9726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:06.150000 audit[9726]: CRED_ACQ pid=9726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:06.150000 audit[9726]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb3469f60 a2=3 a3=0 items=0 ppid=1 pid=9726 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:06.150000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:06.162095 sshd-session[9726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:06.194219 containerd[1643]: time="2026-01-20T02:03:06.193963465Z" level=info msg="container event discarded" container=fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91 type=CONTAINER_CREATED_EVENT Jan 20 02:03:06.250219 systemd-logind[1623]: New session 54 of user core. Jan 20 02:03:06.270144 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 20 02:03:06.315000 audit[9726]: USER_START pid=9726 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:06.330000 audit[9729]: CRED_ACQ pid=9729 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:06.368404 containerd[1643]: time="2026-01-20T02:03:06.365294864Z" level=info msg="container event discarded" container=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a type=CONTAINER_STOPPED_EVENT Jan 20 02:03:07.599776 containerd[1643]: time="2026-01-20T02:03:07.599497183Z" level=info msg="container event discarded" container=ac926cfb8ee86f652147eaf64a89610b58c726122cd9f4dc5c3509989f4585ad type=CONTAINER_STARTED_EVENT Jan 20 02:03:08.383660 sshd[9729]: Connection closed by 10.0.0.1 port 60140 Jan 20 02:03:08.384409 sshd-session[9726]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:08.409000 audit[9726]: USER_END pid=9726 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:08.434000 audit[9726]: CRED_DISP pid=9726 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
02:03:08.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.44:22-10.0.0.1:60148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:08.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.44:22-10.0.0.1:60140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:08.502208 systemd[1]: Started sshd@54-10.0.0.44:22-10.0.0.1:60148.service - OpenSSH per-connection server daemon (10.0.0.1:60148). Jan 20 02:03:08.503406 systemd[1]: sshd@53-10.0.0.44:22-10.0.0.1:60140.service: Deactivated successfully. Jan 20 02:03:08.515134 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 02:03:08.549162 systemd-logind[1623]: Session 54 logged out. Waiting for processes to exit. Jan 20 02:03:08.568369 systemd-logind[1623]: Removed session 54. Jan 20 02:03:08.723935 containerd[1643]: time="2026-01-20T02:03:08.710781091Z" level=info msg="container event discarded" container=b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342 type=CONTAINER_CREATED_EVENT Jan 20 02:03:08.723935 containerd[1643]: time="2026-01-20T02:03:08.710861094Z" level=info msg="container event discarded" container=b4628897cbcdfa66dcaff7e565e3facf6875e967af7b924419c5163f5a61c342 type=CONTAINER_STARTED_EVENT Jan 20 02:03:09.402656 sshd[9738]: Accepted publickey for core from 10.0.0.1 port 60148 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:09.492406 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 20 02:03:09.492594 kernel: audit: type=1101 audit(1768874589.398:1203): pid=9738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.398000 audit[9738]: USER_ACCT pid=9738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.424641 sshd-session[9738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:09.413000 audit[9738]: CRED_ACQ pid=9738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.572791 kernel: audit: type=1103 audit(1768874589.413:1204): pid=9738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.575084 kernel: audit: type=1006 audit(1768874589.413:1205): pid=9738 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Jan 20 02:03:09.575238 systemd-logind[1623]: New session 55 of user core. 
Jan 20 02:03:09.623610 kernel: audit: type=1300 audit(1768874589.413:1205): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1cd99f30 a2=3 a3=0 items=0 ppid=1 pid=9738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:09.413000 audit[9738]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1cd99f30 a2=3 a3=0 items=0 ppid=1 pid=9738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:09.699453 kernel: audit: type=1327 audit(1768874589.413:1205): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:09.413000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:09.750389 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 20 02:03:09.802000 audit[9738]: USER_START pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.928758 kernel: audit: type=1105 audit(1768874589.802:1206): pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.934152 kernel: audit: type=1103 audit(1768874589.837:1207): pid=9745 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.837000 audit[9745]: CRED_ACQ pid=9745 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.969630 containerd[1643]: time="2026-01-20T02:03:09.969461461Z" level=info msg="container event discarded" container=abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc type=CONTAINER_CREATED_EVENT Jan 20 02:03:09.976276 containerd[1643]: time="2026-01-20T02:03:09.970538151Z" level=info msg="container event discarded" container=abba75bf480412f1412b57fb168305e9932482eeb98eb469b4628d020e5365dc type=CONTAINER_STARTED_EVENT Jan 20 02:03:10.950768 containerd[1643]: time="2026-01-20T02:03:10.950454227Z" level=info msg="container event discarded" container=fa502b3c3b62c8a0cba0636ff66480a94da6ccb8ab38b54e4096d05aa4bf3e91 type=CONTAINER_STARTED_EVENT Jan 20 02:03:13.827494 kubelet[3123]: E0120 02:03:13.823196 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" 
podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:03:14.734000 audit[9761]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=9761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:14.802205 kernel: audit: type=1325 audit(1768874594.734:1208): table=filter:146 family=2 entries=14 op=nft_register_rule pid=9761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:14.734000 audit[9761]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2d2ed220 a2=0 a3=7fff2d2ed20c items=0 ppid=3237 pid=9761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:14.817296 kubelet[3123]: E0120 02:03:14.806857 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:03:14.817296 kubelet[3123]: E0120 02:03:14.807440 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:03:14.936916 kernel: audit: type=1300 audit(1768874594.734:1208): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2d2ed220 a2=0 a3=7fff2d2ed20c items=0 ppid=3237 pid=9761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:14.937122 kernel: audit: type=1327 audit(1768874594.734:1208): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:14.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:14.944863 kernel: audit: type=1325 audit(1768874594.831:1209): table=nat:147 family=2 entries=20 op=nft_register_rule pid=9761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:14.831000 audit[9761]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=9761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:14.956877 sshd-session[9738]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:14.965876 sshd[9745]: Connection closed by 10.0.0.1 port 60148 Jan 20 02:03:14.831000 audit[9761]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2d2ed220 a2=0 a3=7fff2d2ed20c items=0 ppid=3237 pid=9761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:14.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:15.197935 kernel: audit: type=1300 audit(1768874594.831:1209): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2d2ed220 a2=0 a3=7fff2d2ed20c items=0 ppid=3237 pid=9761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:15.198158 kernel: audit: type=1327 audit(1768874594.831:1209): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:15.198216 kernel: audit: type=1106 audit(1768874594.983:1210): pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:14.983000 audit[9738]: USER_END pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.299144 systemd[1]: sshd@54-10.0.0.44:22-10.0.0.1:60148.service: Deactivated successfully. Jan 20 02:03:14.984000 audit[9738]: CRED_DISP pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.326942 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 02:03:15.327546 systemd[1]: session-55.scope: Consumed 1.105s CPU time, 42.4M memory peak. Jan 20 02:03:15.354614 systemd-logind[1623]: Session 55 logged out. Waiting for processes to exit. Jan 20 02:03:15.405112 systemd[1]: Started sshd@55-10.0.0.44:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). 
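The PROCTITLE fields in the audit records above are hex-encoded because the raw process title embeds NUL separators between argv elements, and auditd falls back to hex whenever the value is not plain printable text. A minimal Python sketch (decode_proctitle is a hypothetical helper, not part of any audit tooling) recovers the command lines:

    def decode_proctitle(hexstr: str) -> str:
        # The kernel stores argv as NUL-separated bytes; audit hex-encodes it.
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    # Values taken verbatim from the PROCTITLE records above:
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # sshd-session: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # iptables-restore -w 5 -W 100000 --noflush --counters

The second value shows it was kube-proxy's iptables-restore invocation (with --noflush --counters) that produced the NETFILTER_CFG records above.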
Jan 20 02:03:15.422926 kernel: audit: type=1104 audit(1768874594.984:1211): pid=9738 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.424886 kernel: audit: type=1325 audit(1768874595.120:1212): table=filter:148 family=2 entries=26 op=nft_register_rule pid=9765 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:15.120000 audit[9765]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=9765 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:15.484226 kernel: audit: type=1300 audit(1768874595.120:1212): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3145f9d0 a2=0 a3=7fff3145f9bc items=0 ppid=3237 pid=9765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:15.120000 audit[9765]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3145f9d0 a2=0 a3=7fff3145f9bc items=0 ppid=3237 pid=9765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:15.467357 systemd-logind[1623]: Removed session 55. Jan 20 02:03:15.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:15.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.44:22-10.0.0.1:60148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:15.378000 audit[9765]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=9765 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:03:15.378000 audit[9765]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3145f9d0 a2=0 a3=0 items=0 ppid=3237 pid=9765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:15.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:03:15.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.44:22-10.0.0.1:54336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:15.955270 kubelet[3123]: E0120 02:03:15.947257 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:03:16.548000 audit[9769]: USER_ACCT pid=9769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.568830 sshd[9769]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:16.580000 audit[9769]: CRED_ACQ pid=9769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.580000 audit[9769]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3e2cb9e0 a2=3 a3=0 items=0 ppid=1 pid=9769 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:16.580000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:16.585060 sshd-session[9769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:16.693106 systemd-logind[1623]: New session 56 of user core. Jan 20 02:03:16.736104 systemd[1]: Started session-56.scope - Session 56 of User core. 
Jan 20 02:03:16.811000 audit[9769]: USER_START pid=9769 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.850031 kubelet[3123]: E0120 02:03:16.847140 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:03:16.871000 audit[9772]: CRED_ACQ pid=9772 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:18.777491 kubelet[3123]: E0120 02:03:18.771891 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:19.855973 kubelet[3123]: E0120 02:03:19.855586 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:03:19.942846 sshd[9772]: Connection closed by 10.0.0.1 port 54336 Jan 20 02:03:19.897859 sshd-session[9769]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:20.078820 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 20 02:03:20.078994 kernel: audit: type=1106 audit(1768874599.942:1221): pid=9769 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:19.942000 audit[9769]: USER_END pid=9769 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:19.942000 audit[9769]: CRED_DISP pid=9769 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:20.241333 containerd[1643]: time="2026-01-20T02:03:20.151317231Z" level=info msg="container event discarded" container=e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7 type=CONTAINER_CREATED_EVENT Jan 20 02:03:20.241333 containerd[1643]: time="2026-01-20T02:03:20.151350751Z" level=info msg="container event discarded" container=e2931e79b3bcff5ee91130373062f5ef39acfc966ff11600cddef925ff2ec4b7 type=CONTAINER_STARTED_EVENT Jan 20 02:03:20.245512 kernel: audit: type=1104 audit(1768874599.942:1222): pid=9769 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:20.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.44:22-10.0.0.1:54336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:20.261588 systemd[1]: sshd@55-10.0.0.44:22-10.0.0.1:54336.service: Deactivated successfully. Jan 20 02:03:20.319482 kernel: audit: type=1131 audit(1768874600.260:1223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.44:22-10.0.0.1:54336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:20.319601 kernel: audit: type=1130 audit(1768874600.317:1224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.44:22-10.0.0.1:54340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:20.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.44:22-10.0.0.1:54340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:20.292413 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 02:03:20.318356 systemd[1]: Started sshd@56-10.0.0.44:22-10.0.0.1:54340.service - OpenSSH per-connection server daemon (10.0.0.1:54340). Jan 20 02:03:20.326520 systemd-logind[1623]: Session 56 logged out. Waiting for processes to exit. Jan 20 02:03:20.337541 systemd-logind[1623]: Removed session 56. 
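The recurring kubelet dns.go "Nameserver limits exceeded" records reflect the classic resolver limit of three nameservers: the node's resolv.conf evidently lists more, and kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8), omitting the rest. A rough Python sketch of that truncation, assuming the conventional limit of 3 and a hypothetical resolv.conf:

    MAX_NAMESERVERS = 3  # resolver limit kubelet enforces (glibc MAXNS)

    def applied_nameservers(resolv_conf: str) -> list[str]:
        servers = [line.split()[1]
                   for line in resolv_conf.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAX_NAMESERVERS]

    # Hypothetical resolv.conf with one nameserver too many:
    conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")
    print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']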
Jan 20 02:03:20.926000 audit[9810]: USER_ACCT pid=9810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:20.971904 sshd[9810]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:21.000203 sshd-session[9810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:21.040662 kernel: audit: type=1101 audit(1768874600.926:1225): pid=9810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:21.070903 kernel: audit: type=1103 audit(1768874600.995:1226): pid=9810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:20.995000 audit[9810]: CRED_ACQ pid=9810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:21.140842 systemd-logind[1623]: New session 57 of user core. Jan 20 02:03:21.220335 kernel: audit: type=1006 audit(1768874600.995:1227): pid=9810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1 Jan 20 02:03:21.220476 kernel: audit: type=1300 audit(1768874600.995:1227): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda7c393b0 a2=3 a3=0 items=0 ppid=1 pid=9810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:20.995000 audit[9810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda7c393b0 a2=3 a3=0 items=0 ppid=1 pid=9810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:20.995000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:21.367810 kernel: audit: type=1327 audit(1768874600.995:1227): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:21.395888 systemd[1]: Started session-57.scope - Session 57 of User core. 
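Note that auid=4294967295 and ses=4294967295 in the pre-login records above are not real IDs: the value is -1 reinterpreted as an unsigned 32-bit integer, the audit subsystem's "unset" sentinel, which pam_loginuid replaces with the real login uid and session (auid=500, ses=57) once the session opens. A one-liner confirms the value:

    import ctypes
    # -1 as an unsigned 32-bit integer is audit's "unset" marker.
    print(ctypes.c_uint32(-1).value)  # 4294967295
    print((1 << 32) - 1)             # 4294967295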
Jan 20 02:03:21.502000 audit[9810]: USER_START pid=9810 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:21.581294 kernel: audit: type=1105 audit(1768874601.502:1228): pid=9810 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:21.561000 audit[9814]: CRED_ACQ pid=9814 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.978816 kubelet[3123]: E0120 02:03:22.963452 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:23.808788 sshd[9814]: Connection closed by 10.0.0.1 port 54340 Jan 20 02:03:23.806973 sshd-session[9810]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:23.805000 audit[9810]: USER_END pid=9810 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:23.805000 audit[9810]: CRED_DISP pid=9810 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:23.827607 systemd-logind[1623]: Session 57 logged out. Waiting for processes to exit. Jan 20 02:03:23.843181 systemd[1]: sshd@56-10.0.0.44:22-10.0.0.1:54340.service: Deactivated successfully. Jan 20 02:03:23.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.44:22-10.0.0.1:54340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:23.872303 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 02:03:23.902129 systemd-logind[1623]: Removed session 57. 
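The USER_START/USER_END pairs above bracket each SSH session, and every kernel-echoed audit record carries an audit(<epoch>:<serial>) timestamp, so session durations can be recovered by pairing records per pid. A small sketch (the sample lines are abbreviated from the records above):

    import re

    # type=1105 is USER_START, type=1106 is USER_END.
    REC = re.compile(r"type=(1105|1106) audit\((\d+\.\d+):\d+\).*?pid=(\d+)")

    def session_durations(lines):
        started = {}
        for line in lines:
            m = REC.search(line)
            if not m:
                continue
            rtype, ts, pid = m.group(1), float(m.group(2)), m.group(3)
            if rtype == "1105":
                started[pid] = ts
            elif pid in started:
                yield pid, ts - started.pop(pid)

    sample = [
        "audit: type=1105 audit(1768874589.802:1206): pid=9738 ...",
        "audit: type=1106 audit(1768874594.983:1210): pid=9738 ...",
    ]
    print(dict(session_durations(sample)))  # {'9738': 5.18...}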
Jan 20 02:03:24.773301 kubelet[3123]: E0120 02:03:24.773134 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:03:26.802277 containerd[1643]: time="2026-01-20T02:03:26.795147787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:03:27.008053 containerd[1643]: time="2026-01-20T02:03:27.007066730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:27.023844 containerd[1643]: time="2026-01-20T02:03:27.023761211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:27.024402 containerd[1643]: time="2026-01-20T02:03:27.024118503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:03:27.032374 kubelet[3123]: E0120 02:03:27.027218 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:03:27.032374 kubelet[3123]: E0120 02:03:27.027300 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:03:27.032374 kubelet[3123]: E0120 02:03:27.027642 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbt2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-wbvft_calico-apiserver(589f656f-1e0a-4667-bc0d-42908aab3340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:27.032374 kubelet[3123]: E0120 02:03:27.029335 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:03:27.811580 kubelet[3123]: E0120 02:03:27.811113 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:03:28.788501 kubelet[3123]: E0120 02:03:28.786501 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:03:28.806113 kubelet[3123]: E0120 02:03:28.794501 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:03:29.026878 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 20 02:03:29.027073 kernel: audit: type=1130 audit(1768874608.989:1233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.44:22-10.0.0.1:52448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:28.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.44:22-10.0.0.1:52448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:28.994297 systemd[1]: Started sshd@57-10.0.0.44:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Jan 20 02:03:29.441000 audit[9829]: USER_ACCT pid=9829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.523662 sshd[9829]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:29.533362 kernel: audit: type=1101 audit(1768874609.441:1234): pid=9829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.529000 audit[9829]: CRED_ACQ pid=9829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.534391 sshd-session[9829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:29.595614 systemd-logind[1623]: New session 58 of user core. Jan 20 02:03:29.613295 kernel: audit: type=1103 audit(1768874609.529:1235): pid=9829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.613441 kernel: audit: type=1006 audit(1768874609.529:1236): pid=9829 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Jan 20 02:03:29.529000 audit[9829]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca7cccc30 a2=3 a3=0 items=0 ppid=1 pid=9829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:29.705984 systemd[1]: Started session-58.scope - Session 58 of User core. 
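Every pull above fails the same way: containerd's resolver gets a 404 from ghcr.io, meaning the :v3.30.4 tags simply do not exist in these repositories. One way to verify that independently of the kubelet is to query the OCI distribution API directly; a sketch assuming anonymous pull access on ghcr.io:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # Anonymous bearer token for a pull-scoped request.
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {tok}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            })
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False  # the NotFound containerd reports above
            raise

    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))  # expect False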
Jan 20 02:03:29.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:29.826306 kernel: audit: type=1300 audit(1768874609.529:1236): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca7cccc30 a2=3 a3=0 items=0 ppid=1 pid=9829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:29.826461 kernel: audit: type=1327 audit(1768874609.529:1236): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:29.826571 kernel: audit: type=1105 audit(1768874609.790:1237): pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.790000 audit[9829]: USER_START pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.906036 kernel: audit: type=1103 audit(1768874609.828:1238): pid=9832 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.828000 audit[9832]: CRED_ACQ pid=9832 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:30.796281 kubelet[3123]: E0120 02:03:30.796170 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:30.910948 kubelet[3123]: E0120 02:03:30.897248 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:03:31.196489 sshd[9832]: Connection closed by 10.0.0.1 port 52448 Jan 20 02:03:31.216988 sshd-session[9829]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:31.228000 audit[9829]: USER_END pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.260539 systemd[1]: sshd@57-10.0.0.44:22-10.0.0.1:52448.service: Deactivated successfully. Jan 20 02:03:31.283525 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 02:03:31.298468 systemd-logind[1623]: Session 58 logged out. Waiting for processes to exit. Jan 20 02:03:31.324835 systemd-logind[1623]: Removed session 58. Jan 20 02:03:31.228000 audit[9829]: CRED_DISP pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.367281 kernel: audit: type=1106 audit(1768874611.228:1239): pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.367444 kernel: audit: type=1104 audit(1768874611.228:1240): pid=9829 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.44:22-10.0.0.1:52448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:31.770771 kubelet[3123]: E0120 02:03:31.768966 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:35.790474 kubelet[3123]: E0120 02:03:35.770606 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:36.342635 systemd[1]: Started sshd@58-10.0.0.44:22-10.0.0.1:48778.service - OpenSSH per-connection server daemon (10.0.0.1:48778). Jan 20 02:03:36.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.44:22-10.0.0.1:48778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:36.378914 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:36.383287 kernel: audit: type=1130 audit(1768874616.342:1242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.44:22-10.0.0.1:48778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:37.089000 audit[9846]: USER_ACCT pid=9846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.222301 kernel: audit: type=1101 audit(1768874617.089:1243): pid=9846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.124847 sshd-session[9846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:37.234883 sshd[9846]: Accepted publickey for core from 10.0.0.1 port 48778 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:37.115000 audit[9846]: CRED_ACQ pid=9846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.304910 kernel: audit: type=1103 audit(1768874617.115:1244): pid=9846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.342982 systemd-logind[1623]: New session 59 of user core. Jan 20 02:03:37.115000 audit[9846]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbfd132b0 a2=3 a3=0 items=0 ppid=1 pid=9846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:37.519651 kernel: audit: type=1006 audit(1768874617.115:1245): pid=9846 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1 Jan 20 02:03:37.521604 kernel: audit: type=1300 audit(1768874617.115:1245): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbfd132b0 a2=3 a3=0 items=0 ppid=1 pid=9846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:37.115000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:37.559269 kernel: audit: type=1327 audit(1768874617.115:1245): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:37.560123 systemd[1]: Started session-59.scope - Session 59 of User core. 
Jan 20 02:03:37.614000 audit[9846]: USER_START pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.714623 kernel: audit: type=1105 audit(1768874617.614:1246): pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.714872 kernel: audit: type=1103 audit(1768874617.699:1247): pid=9849 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.699000 audit[9849]: CRED_ACQ pid=9849 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:38.801970 kubelet[3123]: E0120 02:03:38.799357 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:03:38.942359 sshd[9849]: Connection closed by 10.0.0.1 port 48778 Jan 20 02:03:38.963981 sshd-session[9846]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:39.178820 kernel: audit: type=1106 audit(1768874619.041:1248): pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:39.041000 audit[9846]: USER_END pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:39.120121 systemd[1]: sshd@58-10.0.0.44:22-10.0.0.1:48778.service: Deactivated successfully. Jan 20 02:03:39.139869 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 02:03:39.041000 audit[9846]: CRED_DISP pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:39.303530 systemd-logind[1623]: Session 59 logged out. Waiting for processes to exit. Jan 20 02:03:39.326142 systemd-logind[1623]: Removed session 59. 
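The alternation above between ErrImagePull (a fresh pull attempt) and ImagePullBackOff (waiting out the delay) follows kubelet's image pull back-off, which doubles the wait after each failure up to a cap. A sketch with the commonly cited defaults of 10s initial and 300s maximum, which are an assumption here rather than values read from this node's configuration:

    import itertools

    def backoff_delays(initial: float = 10.0, cap: float = 300.0):
        # Doubling back-off with a ceiling, applied between pull retries.
        delay = initial
        while True:
            yield delay
            delay = min(delay * 2, cap)

    print(list(itertools.islice(backoff_delays(), 7)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

Once the cap is reached, retries settle into the roughly five-minute cadence visible in the timestamps of the repeated pod_workers.go records.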
Jan 20 02:03:39.402146 kernel: audit: type=1104 audit(1768874619.041:1249): pid=9846 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:39.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.44:22-10.0.0.1:48778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:39.798815 containerd[1643]: time="2026-01-20T02:03:39.797195373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:03:39.807534 kubelet[3123]: E0120 02:03:39.805611 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:03:40.079561 containerd[1643]: time="2026-01-20T02:03:40.074222216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:40.085895 containerd[1643]: time="2026-01-20T02:03:40.084607852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:03:40.086055 containerd[1643]: time="2026-01-20T02:03:40.085815314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:40.086813 kubelet[3123]: E0120 02:03:40.086646 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:03:40.094302 kubelet[3123]: E0120 02:03:40.094096 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:03:40.098670 kubelet[3123]: E0120 02:03:40.098588 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvpjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b798b596-r7ptx_calico-apiserver(03f653dd-0210-41e9-9d70-a3905826baa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:40.107385 kubelet[3123]: E0120 02:03:40.105988 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:03:42.915057 kubelet[3123]: E0120 02:03:42.910977 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:03:43.867449 kubelet[3123]: E0120 02:03:43.865367 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:03:44.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.44:22-10.0.0.1:48782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:44.035252 systemd[1]: Started sshd@59-10.0.0.44:22-10.0.0.1:48782.service - OpenSSH per-connection server daemon (10.0.0.1:48782). Jan 20 02:03:44.171054 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:44.179142 kernel: audit: type=1130 audit(1768874624.034:1251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.44:22-10.0.0.1:48782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:44.807840 kubelet[3123]: E0120 02:03:44.807145 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:03:44.806000 audit[9873]: USER_ACCT pid=9873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.850650 sshd[9873]: Accepted publickey for core from 10.0.0.1 port 48782 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:44.857038 sshd-session[9873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:44.850000 audit[9873]: CRED_ACQ pid=9873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.993386 kernel: audit: type=1101 audit(1768874624.806:1252): pid=9873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.993798 kernel: audit: type=1103 audit(1768874624.850:1253): pid=9873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:45.031877 kernel: audit: type=1006 audit(1768874624.850:1254): pid=9873 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 20 02:03:45.026909 systemd-logind[1623]: New session 60 of user core. Jan 20 02:03:44.850000 audit[9873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbac87040 a2=3 a3=0 items=0 ppid=1 pid=9873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:45.124601 kernel: audit: type=1300 audit(1768874624.850:1254): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbac87040 a2=3 a3=0 items=0 ppid=1 pid=9873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:45.128112 kernel: audit: type=1327 audit(1768874624.850:1254): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:44.850000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:45.189403 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 20 02:03:45.252000 audit[9873]: USER_START pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:45.324000 audit[9876]: CRED_ACQ pid=9876 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:45.588831 kernel: audit: type=1105 audit(1768874625.252:1255): pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:45.589024 kernel: audit: type=1103 audit(1768874625.324:1256): pid=9876 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:46.917597 sshd[9876]: Connection closed by 10.0.0.1 port 48782 Jan 20 02:03:46.928058 sshd-session[9873]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:47.018000 audit[9873]: USER_END pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:47.110623 systemd[1]: 
sshd@59-10.0.0.44:22-10.0.0.1:48782.service: Deactivated successfully. Jan 20 02:03:47.177048 kernel: audit: type=1106 audit(1768874627.018:1257): pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:47.141358 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 02:03:47.018000 audit[9873]: CRED_DISP pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:47.195592 systemd-logind[1623]: Session 60 logged out. Waiting for processes to exit. Jan 20 02:03:47.208898 systemd-logind[1623]: Removed session 60. Jan 20 02:03:47.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.44:22-10.0.0.1:48782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:47.304813 kernel: audit: type=1104 audit(1768874627.018:1258): pid=9873 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:50.099371 containerd[1643]: time="2026-01-20T02:03:50.097489761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:03:50.326589 containerd[1643]: time="2026-01-20T02:03:50.325315700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:50.341836 containerd[1643]: time="2026-01-20T02:03:50.333107503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:50.341836 containerd[1643]: time="2026-01-20T02:03:50.333194721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:03:50.345540 kubelet[3123]: E0120 02:03:50.343818 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:03:50.345540 kubelet[3123]: E0120 02:03:50.343898 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:03:50.353335 kubelet[3123]: E0120 02:03:50.344099 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-szpvj_calico-system(285811f9-e547-431f-a7b0-90e1226d2f4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:50.361491 kubelet[3123]: E0120 02:03:50.353917 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:03:50.795440 kubelet[3123]: E0120 02:03:50.794337 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:03:52.031997 systemd[1]: Started sshd@60-10.0.0.44:22-10.0.0.1:59948.service - OpenSSH per-connection server daemon (10.0.0.1:59948). Jan 20 02:03:52.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.44:22-10.0.0.1:59948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:52.059545 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:52.059646 kernel: audit: type=1130 audit(1768874632.032:1260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.44:22-10.0.0.1:59948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:52.576000 audit[9917]: USER_ACCT pid=9917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:52.598563 sshd-session[9917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:52.617538 sshd[9917]: Accepted publickey for core from 10.0.0.1 port 59948 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:03:52.684331 kernel: audit: type=1101 audit(1768874632.576:1261): pid=9917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:52.684477 kernel: audit: type=1103 audit(1768874632.592:1262): pid=9917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:52.592000 audit[9917]: CRED_ACQ pid=9917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:52.741498 systemd-logind[1623]: New session 61 of user core. 
Jan 20 02:03:52.782390 kernel: audit: type=1006 audit(1768874632.592:1263): pid=9917 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 20 02:03:52.592000 audit[9917]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d05bc0 a2=3 a3=0 items=0 ppid=1 pid=9917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:52.933394 kernel: audit: type=1300 audit(1768874632.592:1263): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d05bc0 a2=3 a3=0 items=0 ppid=1 pid=9917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:52.999906 kernel: audit: type=1327 audit(1768874632.592:1263): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:52.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:52.996652 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 20 02:03:53.071000 audit[9917]: USER_START pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:53.098000 audit[9920]: CRED_ACQ pid=9920 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:53.321315 kernel: audit: type=1105 audit(1768874633.071:1264): pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:53.321490 kernel: audit: type=1103 audit(1768874633.098:1265): pid=9920 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:53.812227 kubelet[3123]: E0120 02:03:53.787462 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:54.488227 sshd[9920]: Connection closed by 10.0.0.1 port 59948 Jan 20 02:03:54.485862 sshd-session[9917]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:54.493000 audit[9917]: USER_END pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:54.526313 systemd[1]: sshd@60-10.0.0.44:22-10.0.0.1:59948.service: Deactivated successfully. Jan 20 02:03:54.567889 systemd[1]: session-61.scope: Deactivated successfully. 
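Each SSH login above leaves a fixed PAM/audit sequence: USER_ACCT and CRED_ACQ at authentication, USER_START when the session opens, then USER_END and CRED_DISP at logout. A small sketch that pairs opens with closes by sshd-session pid, assuming the journal text is available as an iterable of lines like the ones shown (hypothetical input, for illustration):

    import re

    # Matches "audit[<pid>]: USER_START ..." / "audit[<pid>]: USER_END ..."
    # in journal lines like the ones above.
    PAT = re.compile(r"audit\[(?P<pid>\d+)\]: (?P<event>USER_START|USER_END)")

    def session_spans(lines):
        opened = {}
        for line in lines:
            m = PAT.search(line)
            if m is None:
                continue
            pid = m.group("pid")
            if m.group("event") == "USER_START":
                opened[pid] = line
            else:
                # Yields one (open-record, close-record) pair per session pid.
                yield opened.pop(pid, None), line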
Jan 20 02:03:54.493000 audit[9917]: CRED_DISP pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:54.598247 systemd-logind[1623]: Session 61 logged out. Waiting for processes to exit. Jan 20 02:03:54.620286 systemd-logind[1623]: Removed session 61. Jan 20 02:03:54.659072 kernel: audit: type=1106 audit(1768874634.493:1266): pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:54.659230 kernel: audit: type=1104 audit(1768874634.493:1267): pid=9917 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:54.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.44:22-10.0.0.1:59948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:54.794569 kubelet[3123]: E0120 02:03:54.784809 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:03:54.899345 containerd[1643]: time="2026-01-20T02:03:54.898305120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:03:55.066545 containerd[1643]: time="2026-01-20T02:03:55.063200242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:55.084852 containerd[1643]: time="2026-01-20T02:03:55.084574793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:55.085541 containerd[1643]: time="2026-01-20T02:03:55.085234153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:03:55.091452 kubelet[3123]: E0120 02:03:55.088500 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:03:55.105241 kubelet[3123]: E0120 02:03:55.096628 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:03:55.105241 kubelet[3123]: E0120 02:03:55.104386 3123 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:55.127196 containerd[1643]: time="2026-01-20T02:03:55.126580317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:03:55.376579 containerd[1643]: time="2026-01-20T02:03:55.373646439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:55.427820 containerd[1643]: time="2026-01-20T02:03:55.427560964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:03:55.428425 containerd[1643]: time="2026-01-20T02:03:55.428136143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:55.436749 kubelet[3123]: E0120 02:03:55.431208 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:03:55.436749 kubelet[3123]: E0120 02:03:55.431286 3123 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:03:55.436749 kubelet[3123]: E0120 02:03:55.431543 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x6f5h_calico-system(eeb09d5e-8a63-4fca-910b-ea49fa1ecf05): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:55.437288 kubelet[3123]: E0120 02:03:55.436874 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:03:55.832007 kubelet[3123]: E0120 02:03:55.826459 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:03:57.830346 containerd[1643]: time="2026-01-20T02:03:57.825043332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:03:58.244319 containerd[1643]: time="2026-01-20T02:03:58.242552573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:03:58.334051 containerd[1643]: time="2026-01-20T02:03:58.331268783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:03:58.334051 containerd[1643]: time="2026-01-20T02:03:58.331532960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:03:58.360876 kubelet[3123]: E0120 02:03:58.355080 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:03:58.360876 kubelet[3123]: E0120 02:03:58.355170 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:03:58.360876 kubelet[3123]: E0120 02:03:58.355427 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d7bc7b4f-k5t2j_calico-system(68cbc571-4445-4166-912c-8fdfe252aae2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:03:58.378309 kubelet[3123]: E0120 02:03:58.362075 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:03:58.498172 containerd[1643]: time="2026-01-20T02:03:58.460043657Z" level=info msg="container event discarded" container=034fbd07ee5273becdff9c90f00b3b61cc80f1fc3ab426b861c24502dfd2f32a type=CONTAINER_DELETED_EVENT 
Jan 20 02:03:59.686052 systemd[1]: Started sshd@61-10.0.0.44:22-10.0.0.1:49760.service - OpenSSH per-connection server daemon (10.0.0.1:49760). Jan 20 02:03:59.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.44:22-10.0.0.1:49760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:59.722497 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:59.722645 kernel: audit: type=1130 audit(1768874639.686:1269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.44:22-10.0.0.1:49760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:00.500423 sshd[9934]: Accepted publickey for core from 10.0.0.1 port 49760 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:00.496000 audit[9934]: USER_ACCT pid=9934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.519010 sshd-session[9934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:00.608027 systemd-logind[1623]: New session 62 of user core. Jan 20 02:04:00.630226 kernel: audit: type=1101 audit(1768874640.496:1270): pid=9934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.630357 kernel: audit: type=1103 audit(1768874640.508:1271): pid=9934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.508000 audit[9934]: CRED_ACQ pid=9934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.728448 kernel: audit: type=1006 audit(1768874640.508:1272): pid=9934 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Jan 20 02:04:00.730862 kernel: audit: type=1300 audit(1768874640.508:1272): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd5d98d80 a2=3 a3=0 items=0 ppid=1 pid=9934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:00.508000 audit[9934]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd5d98d80 a2=3 a3=0 items=0 ppid=1 pid=9934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:00.733329 systemd[1]: Started session-62.scope - Session 62 of User core. 
Jan 20 02:04:00.829311 kernel: audit: type=1327 audit(1768874640.508:1272): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:00.508000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:00.849367 kernel: audit: type=1105 audit(1768874640.798:1273): pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.798000 audit[9934]: USER_START pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:00.877146 kubelet[3123]: E0120 02:04:00.872978 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:04:00.822000 audit[9937]: CRED_ACQ pid=9937 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:01.052322 kernel: audit: type=1103 audit(1768874640.822:1274): pid=9937 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.223980 sshd[9937]: Connection closed by 10.0.0.1 port 49760 Jan 20 02:04:03.236161 sshd-session[9934]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:03.260000 audit[9934]: USER_END pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.308353 systemd[1]: sshd@61-10.0.0.44:22-10.0.0.1:49760.service: Deactivated successfully. Jan 20 02:04:03.370356 kernel: audit: type=1106 audit(1768874643.260:1275): pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.392044 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 02:04:03.260000 audit[9934]: CRED_DISP pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.438012 systemd-logind[1623]: Session 62 logged out. Waiting for processes to exit. 
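The PROCTITLE payload in the type=1327 records above is hex-encoded because a process title may contain NUL argv separators. Decoding the value logged here:

    # Decode the hex proctitle from the audit record; NULs (if any)
    # separate argv entries.
    raw = "737368642D73657373696F6E3A20636F7265205B707269765D"
    print(bytes.fromhex(raw).replace(b"\x00", b" ").decode())
    # -> sshd-session: core [priv]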
Jan 20 02:04:03.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.44:22-10.0.0.1:49760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:03.471905 kernel: audit: type=1104 audit(1768874643.260:1276): pid=9934 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.532416 systemd-logind[1623]: Removed session 62. Jan 20 02:04:04.810872 kubelet[3123]: E0120 02:04:04.805976 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:04:04.821826 kubelet[3123]: E0120 02:04:04.812113 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:06.823623 kubelet[3123]: E0120 02:04:06.821060 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:04:06.823623 kubelet[3123]: E0120 02:04:06.823149 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:07.811597 containerd[1643]: time="2026-01-20T02:04:07.811530195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:04:08.075395 containerd[1643]: time="2026-01-20T02:04:08.073650513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:08.131884 containerd[1643]: time="2026-01-20T02:04:08.123859895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:04:08.131884 containerd[1643]: time="2026-01-20T02:04:08.123979733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:08.132223 kubelet[3123]: E0120 02:04:08.127906 3123 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:04:08.132223 kubelet[3123]: E0120 02:04:08.127974 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:04:08.132223 kubelet[3123]: E0120 02:04:08.128499 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:424ac945776d4646865b6465d767c112,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:08.152874 containerd[1643]: time="2026-01-20T02:04:08.151999329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:04:08.493201 containerd[1643]: time="2026-01-20T02:04:08.472509962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:08.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.44:22-10.0.0.1:59662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:08.566135 systemd[1]: Started sshd@62-10.0.0.44:22-10.0.0.1:59662.service - OpenSSH per-connection server daemon (10.0.0.1:59662). 
Jan 20 02:04:08.616072 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:08.616561 kernel: audit: type=1130 audit(1768874648.569:1278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.44:22-10.0.0.1:59662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:08.663141 containerd[1643]: time="2026-01-20T02:04:08.662671283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:08.673499 containerd[1643]: time="2026-01-20T02:04:08.670010027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:04:08.762845 kubelet[3123]: E0120 02:04:08.762131 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:04:08.762845 kubelet[3123]: E0120 02:04:08.762207 3123 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:04:08.768888 kubelet[3123]: E0120 02:04:08.768828 3123 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8bc549748-txp25_calico-system(44944462-7130-49ee-b7c5-4cb73dea6058): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:08.772355 kubelet[3123]: E0120 02:04:08.772230 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:04:09.814809 kubelet[3123]: E0120 02:04:09.811149 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:04:09.899575 sshd[9951]: Accepted publickey for core from 10.0.0.1 
port 59662 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:09.894000 audit[9951]: USER_ACCT pid=9951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:09.910553 sshd-session[9951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:10.002403 kernel: audit: type=1101 audit(1768874649.894:1279): pid=9951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:09.991206 systemd-logind[1623]: New session 63 of user core. Jan 20 02:04:09.894000 audit[9951]: CRED_ACQ pid=9951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.026365 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 20 02:04:10.145845 kernel: audit: type=1103 audit(1768874649.894:1280): pid=9951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.146036 kernel: audit: type=1006 audit(1768874649.905:1281): pid=9951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1 Jan 20 02:04:09.905000 audit[9951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd95191440 a2=3 a3=0 items=0 ppid=1 pid=9951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:10.242549 kernel: audit: type=1300 audit(1768874649.905:1281): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd95191440 a2=3 a3=0 items=0 ppid=1 pid=9951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:10.242860 kernel: audit: type=1327 audit(1768874649.905:1281): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:09.905000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:10.273548 kernel: audit: type=1105 audit(1768874650.117:1282): pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.117000 audit[9951]: USER_START pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.204000 audit[9956]: CRED_ACQ pid=9956 uid=0 auid=500 ses=63 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.444047 kernel: audit: type=1103 audit(1768874650.204:1283): pid=9956 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.792762 sshd[9956]: Connection closed by 10.0.0.1 port 59662 Jan 20 02:04:11.816127 sshd-session[9951]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:11.923759 kernel: audit: type=1106 audit(1768874651.838:1284): pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.838000 audit[9951]: USER_END pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.873581 systemd-logind[1623]: Session 63 logged out. Waiting for processes to exit. Jan 20 02:04:11.892805 systemd[1]: sshd@62-10.0.0.44:22-10.0.0.1:59662.service: Deactivated successfully. Jan 20 02:04:11.838000 audit[9951]: CRED_DISP pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.938489 systemd[1]: session-63.scope: Deactivated successfully. Jan 20 02:04:11.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.44:22-10.0.0.1:59662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:11.945881 kernel: audit: type=1104 audit(1768874651.838:1285): pid=9951 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.987399 systemd-logind[1623]: Removed session 63. 
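Each audit stamp such as audit(1768874651.838:1284) is epoch-seconds.milliseconds plus a per-boot serial number, and auid=4294967295 is (uint32)-1, i.e. "unset", which is why it appears on records emitted before a login UID is assigned. Parsing a stamp (the host clock here is UTC, matching the journal timestamps):

    from datetime import datetime, timezone

    def parse_stamp(stamp):
        # "audit(1768874651.838:1284)" -> (datetime, serial)
        body = stamp.strip("audit()")
        ts, serial = body.split(":")
        return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

    print(parse_stamp("audit(1768874651.838:1284)"))
    # (datetime.datetime(2026, 1, 20, 2, 4, 11, 838000, tzinfo=...), 1284)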
Jan 20 02:04:12.837269 kubelet[3123]: E0120 02:04:12.811066 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:04:14.806287 kubelet[3123]: E0120 02:04:14.805843 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:04:15.817845 kubelet[3123]: E0120 02:04:15.788829 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:04:16.928999 systemd[1]: Started sshd@63-10.0.0.44:22-10.0.0.1:34870.service - OpenSSH per-connection server daemon (10.0.0.1:34870). Jan 20 02:04:16.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.44:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:16.968209 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:16.968364 kernel: audit: type=1130 audit(1768874656.925:1287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.44:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:17.703000 audit[9969]: USER_ACCT pid=9969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.711400 sshd[9969]: Accepted publickey for core from 10.0.0.1 port 34870 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:17.727362 sshd-session[9969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:17.825219 kernel: audit: type=1101 audit(1768874657.703:1288): pid=9969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.825381 kernel: audit: type=1103 audit(1768874657.725:1289): pid=9969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.725000 audit[9969]: CRED_ACQ pid=9969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.963499 kernel: audit: type=1006 audit(1768874657.725:1290): pid=9969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=64 res=1 Jan 20 02:04:17.963602 kernel: audit: type=1300 audit(1768874657.725:1290): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc768d020 a2=3 a3=0 items=0 ppid=1 pid=9969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:17.725000 audit[9969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc768d020 a2=3 a3=0 items=0 ppid=1 pid=9969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:18.067599 kernel: audit: type=1327 audit(1768874657.725:1290): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:17.725000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:18.085480 systemd-logind[1623]: New session 64 of user core. Jan 20 02:04:18.144226 systemd[1]: Started session-64.scope - Session 64 of User core. 
Jan 20 02:04:18.209000 audit[9969]: USER_START pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.307035 kernel: audit: type=1105 audit(1768874658.209:1291): pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.246000 audit[9991]: CRED_ACQ pid=9991 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.416142 kernel: audit: type=1103 audit(1768874658.246:1292): pid=9991 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:19.616961 sshd[9991]: Connection closed by 10.0.0.1 port 34870 Jan 20 02:04:19.616292 sshd-session[9969]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:19.628000 audit[9969]: USER_END pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:19.778526 kernel: audit: type=1106 audit(1768874659.628:1293): pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:19.628000 audit[9969]: CRED_DISP pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:19.965220 kernel: audit: type=1104 audit(1768874659.628:1294): pid=9969 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:19.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.44:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:19.961163 systemd[1]: sshd@63-10.0.0.44:22-10.0.0.1:34870.service: Deactivated successfully. Jan 20 02:04:19.971499 systemd-logind[1623]: Session 64 logged out. Waiting for processes to exit. Jan 20 02:04:19.987620 systemd[1]: session-64.scope: Deactivated successfully. Jan 20 02:04:20.071397 systemd-logind[1623]: Removed session 64. 
Jan 20 02:04:21.806038 kubelet[3123]: E0120 02:04:21.805961 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:04:22.800641 kubelet[3123]: E0120 02:04:22.786224 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:04:24.697888 systemd[1]: Started sshd@64-10.0.0.44:22-10.0.0.1:52318.service - OpenSSH per-connection server daemon (10.0.0.1:52318). Jan 20 02:04:24.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.44:22-10.0.0.1:52318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:24.723162 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:24.723312 kernel: audit: type=1130 audit(1768874664.696:1296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.44:22-10.0.0.1:52318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:24.790022 kubelet[3123]: E0120 02:04:24.786542 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:24.793847 kubelet[3123]: E0120 02:04:24.793417 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:04:25.269270 sshd[10012]: Accepted publickey for core from 10.0.0.1 port 52318 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:25.262000 audit[10012]: USER_ACCT pid=10012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.298056 sshd-session[10012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:25.360132 systemd-logind[1623]: New session 65 of user core. Jan 20 02:04:25.371973 kernel: audit: type=1101 audit(1768874665.262:1297): pid=10012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.275000 audit[10012]: CRED_ACQ pid=10012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.459846 kernel: audit: type=1103 audit(1768874665.275:1298): pid=10012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.505933 kernel: audit: type=1006 audit(1768874665.275:1299): pid=10012 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=65 res=1 Jan 20 02:04:25.275000 audit[10012]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea316f9f0 a2=3 a3=0 items=0 ppid=1 pid=10012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:25.507448 systemd[1]: Started session-65.scope - Session 65 of User core. 
Jan 20 02:04:25.613820 kernel: audit: type=1300 audit(1768874665.275:1299): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea316f9f0 a2=3 a3=0 items=0 ppid=1 pid=10012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:25.613975 kernel: audit: type=1327 audit(1768874665.275:1299): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:25.275000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:25.570000 audit[10012]: USER_START pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.697495 kernel: audit: type=1105 audit(1768874665.570:1300): pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.602000 audit[10016]: CRED_ACQ pid=10016 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.751820 kernel: audit: type=1103 audit(1768874665.602:1301): pid=10016 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.804938 kubelet[3123]: E0120 02:04:25.803366 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:04:26.654394 sshd[10016]: Connection closed by 10.0.0.1 port 52318 Jan 20 02:04:26.651188 sshd-session[10012]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:26.692000 audit[10012]: USER_END pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:26.710328 systemd[1]: sshd@64-10.0.0.44:22-10.0.0.1:52318.service: Deactivated successfully. Jan 20 02:04:26.736178 systemd[1]: session-65.scope: Deactivated successfully. Jan 20 02:04:26.762971 systemd-logind[1623]: Session 65 logged out. Waiting for processes to exit. 
Jan 20 02:04:26.774122 kubelet[3123]: E0120 02:04:26.770855 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:26.791266 kernel: audit: type=1106 audit(1768874666.692:1302): pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:26.693000 audit[10012]: CRED_DISP pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:26.795902 systemd-logind[1623]: Removed session 65. Jan 20 02:04:26.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.44:22-10.0.0.1:52318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:26.848527 kernel: audit: type=1104 audit(1768874666.693:1303): pid=10012 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.810898 kubelet[3123]: E0120 02:04:27.793140 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:04:29.792451 kubelet[3123]: E0120 02:04:29.791050 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:04:31.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.44:22-10.0.0.1:52326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:31.748568 systemd[1]: Started sshd@65-10.0.0.44:22-10.0.0.1:52326.service - OpenSSH per-connection server daemon (10.0.0.1:52326). Jan 20 02:04:31.779881 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:31.782838 kernel: audit: type=1130 audit(1768874671.745:1305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.44:22-10.0.0.1:52326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:32.525000 audit[10042]: USER_ACCT pid=10042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.532881 sshd[10042]: Accepted publickey for core from 10.0.0.1 port 52326 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:32.544564 sshd-session[10042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:32.639630 systemd-logind[1623]: New session 66 of user core. Jan 20 02:04:32.642029 kernel: audit: type=1101 audit(1768874672.525:1306): pid=10042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.542000 audit[10042]: CRED_ACQ pid=10042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.903807 kernel: audit: type=1103 audit(1768874672.542:1307): pid=10042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.904063 kernel: audit: type=1006 audit(1768874672.542:1308): pid=10042 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=66 res=1 Jan 20 02:04:32.904121 kernel: audit: type=1300 audit(1768874672.542:1308): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb56e6290 a2=3 a3=0 items=0 ppid=1 pid=10042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:32.542000 audit[10042]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb56e6290 a2=3 a3=0 items=0 ppid=1 pid=10042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:33.016822 kernel: audit: type=1327 audit(1768874672.542:1308): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:32.542000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:33.017421 systemd[1]: Started session-66.scope - Session 66 of User core. 
Jan 20 02:04:33.057348 kernel: audit: type=1105 audit(1768874673.054:1309): pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.054000 audit[10042]: USER_START pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.209418 kernel: audit: type=1103 audit(1768874673.075:1310): pid=10054 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.075000 audit[10054]: CRED_ACQ pid=10054 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.816404 kubelet[3123]: E0120 02:04:33.809414 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:04:34.429293 sshd[10054]: Connection closed by 10.0.0.1 port 52326 Jan 20 02:04:34.442975 sshd-session[10042]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:34.487000 audit[10042]: USER_END pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:34.522431 systemd[1]: sshd@65-10.0.0.44:22-10.0.0.1:52326.service: Deactivated successfully. Jan 20 02:04:34.537046 systemd[1]: session-66.scope: Deactivated successfully. 
Jan 20 02:04:34.589815 kernel: audit: type=1106 audit(1768874674.487:1311): pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:34.589990 kernel: audit: type=1104 audit(1768874674.488:1312): pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:34.488000 audit[10042]: CRED_DISP pid=10042 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:34.598016 systemd-logind[1623]: Session 66 logged out. Waiting for processes to exit. Jan 20 02:04:34.645840 systemd-logind[1623]: Removed session 66. Jan 20 02:04:34.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.44:22-10.0.0.1:52326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:36.391000 audit[10068]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=10068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:36.391000 audit[10068]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3e006d50 a2=0 a3=7ffc3e006d3c items=0 ppid=3237 pid=10068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:36.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:36.519000 audit[10068]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=10068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:36.519000 audit[10068]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc3e006d50 a2=0 a3=7ffc3e006d3c items=0 ppid=3237 pid=10068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:36.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:36.848058 kubelet[3123]: E0120 02:04:36.847988 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:04:37.820859 kubelet[3123]: E0120 02:04:37.805997 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:04:38.799162 kubelet[3123]: E0120 02:04:38.794581 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:04:39.600305 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 02:04:39.600510 kernel: audit: type=1130 audit(1768874679.489:1316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.44:22-10.0.0.1:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:39.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.44:22-10.0.0.1:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:39.490808 systemd[1]: Started sshd@66-10.0.0.44:22-10.0.0.1:54098.service - OpenSSH per-connection server daemon (10.0.0.1:54098). 
Jan 20 02:04:40.096000 audit[10070]: USER_ACCT pid=10070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.159651 sshd-session[10070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:40.189539 kernel: audit: type=1101 audit(1768874680.096:1317): pid=10070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.194044 sshd[10070]: Accepted publickey for core from 10.0.0.1 port 54098 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:40.154000 audit[10070]: CRED_ACQ pid=10070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.270493 kernel: audit: type=1103 audit(1768874680.154:1318): pid=10070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.154000 audit[10070]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda5492e70 a2=3 a3=0 items=0 ppid=1 pid=10070 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:40.367562 systemd-logind[1623]: New session 67 of user core. Jan 20 02:04:40.446599 kernel: audit: type=1006 audit(1768874680.154:1319): pid=10070 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=67 res=1 Jan 20 02:04:40.446869 kernel: audit: type=1300 audit(1768874680.154:1319): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda5492e70 a2=3 a3=0 items=0 ppid=1 pid=10070 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:40.446921 kernel: audit: type=1327 audit(1768874680.154:1319): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:40.154000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:40.475145 systemd[1]: Started session-67.scope - Session 67 of User core. 
Jan 20 02:04:40.513000 audit[10070]: USER_START pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.532000 audit[10075]: CRED_ACQ pid=10075 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.697524 kernel: audit: type=1105 audit(1768874680.513:1320): pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.697804 kernel: audit: type=1103 audit(1768874680.532:1321): pid=10075 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:40.857135 kubelet[3123]: E0120 02:04:40.856095 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:04:41.707841 sshd[10075]: Connection closed by 10.0.0.1 port 54098 Jan 20 02:04:41.708144 sshd-session[10070]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:41.751322 systemd[1]: sshd@66-10.0.0.44:22-10.0.0.1:54098.service: Deactivated successfully. Jan 20 02:04:41.744000 audit[10070]: USER_END pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:41.785340 systemd[1]: session-67.scope: Deactivated successfully. 
Jan 20 02:04:41.822870 kernel: audit: type=1106 audit(1768874681.744:1322): pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:41.845261 kernel: audit: type=1104 audit(1768874681.745:1323): pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:41.745000 audit[10070]: CRED_DISP pid=10070 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:41.845496 systemd-logind[1623]: Session 67 logged out. Waiting for processes to exit. Jan 20 02:04:41.865468 systemd-logind[1623]: Removed session 67. Jan 20 02:04:41.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.44:22-10.0.0.1:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:44.826862 kubelet[3123]: E0120 02:04:44.826348 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:04:46.785027 kubelet[3123]: E0120 02:04:46.782339 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:46.812576 systemd[1]: Started sshd@67-10.0.0.44:22-10.0.0.1:46406.service - OpenSSH per-connection server daemon (10.0.0.1:46406). Jan 20 02:04:46.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.44:22-10.0.0.1:46406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:46.869826 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:46.869975 kernel: audit: type=1130 audit(1768874686.812:1325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.44:22-10.0.0.1:46406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:47.410000 audit[10089]: USER_ACCT pid=10089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:47.413339 sshd[10089]: Accepted publickey for core from 10.0.0.1 port 46406 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:47.486262 sshd-session[10089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:47.452000 audit[10089]: CRED_ACQ pid=10089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:47.591134 systemd-logind[1623]: New session 68 of user core. Jan 20 02:04:47.616158 kernel: audit: type=1101 audit(1768874687.410:1326): pid=10089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:47.616397 kernel: audit: type=1103 audit(1768874687.452:1327): pid=10089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:47.616430 kernel: audit: type=1006 audit(1768874687.452:1328): pid=10089 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=68 res=1 Jan 20 02:04:47.674130 kernel: audit: type=1300 audit(1768874687.452:1328): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe27aa4b90 a2=3 a3=0 items=0 ppid=1 pid=10089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:47.452000 audit[10089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe27aa4b90 a2=3 a3=0 items=0 ppid=1 pid=10089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:47.805543 kernel: audit: type=1327 audit(1768874687.452:1328): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:47.452000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:47.839360 systemd[1]: Started session-68.scope - Session 68 of User core. 
Jan 20 02:04:47.908322 kubelet[3123]: E0120 02:04:47.907388 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058" Jan 20 02:04:47.911000 audit[10089]: USER_START pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:48.005959 kernel: audit: type=1105 audit(1768874687.911:1329): pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:47.941000 audit[10108]: CRED_ACQ pid=10108 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:48.146044 kernel: audit: type=1103 audit(1768874687.941:1330): pid=10108 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:49.443098 sshd[10108]: Connection closed by 10.0.0.1 port 46406 Jan 20 02:04:49.449539 sshd-session[10089]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:49.459000 audit[10089]: USER_END pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:49.531533 systemd[1]: sshd@67-10.0.0.44:22-10.0.0.1:46406.service: Deactivated successfully. Jan 20 02:04:49.603435 systemd[1]: session-68.scope: Deactivated successfully. 
Jan 20 02:04:49.633250 kernel: audit: type=1106 audit(1768874689.459:1331): pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:49.472000 audit[10089]: CRED_DISP pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:49.634038 systemd-logind[1623]: Session 68 logged out. Waiting for processes to exit. Jan 20 02:04:49.698875 systemd-logind[1623]: Removed session 68. Jan 20 02:04:49.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.44:22-10.0.0.1:46406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:49.771353 kernel: audit: type=1104 audit(1768874689.472:1332): pid=10089 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:49.911799 kubelet[3123]: E0120 02:04:49.905298 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05" Jan 20 02:04:50.823893 kubelet[3123]: E0120 02:04:50.810442 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1" Jan 20 02:04:53.830451 kubelet[3123]: E0120 02:04:53.824651 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:53.845798 kubelet[3123]: E0120 02:04:53.842605 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2" Jan 20 02:04:54.524046 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:54.537914 kernel: audit: type=1130 audit(1768874694.498:1334): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.44:22-10.0.0.1:44984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:54.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.44:22-10.0.0.1:44984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:54.499403 systemd[1]: Started sshd@68-10.0.0.44:22-10.0.0.1:44984.service - OpenSSH per-connection server daemon (10.0.0.1:44984). Jan 20 02:04:54.896000 audit[10132]: USER_ACCT pid=10132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:54.953232 kernel: audit: type=1101 audit(1768874694.896:1335): pid=10132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:54.953469 sshd[10132]: Accepted publickey for core from 10.0.0.1 port 44984 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:04:54.960000 audit[10132]: CRED_ACQ pid=10132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:54.965065 sshd-session[10132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:55.001127 systemd-logind[1623]: New session 69 of user core. Jan 20 02:04:55.026870 kernel: audit: type=1103 audit(1768874694.960:1336): pid=10132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:55.027152 kernel: audit: type=1006 audit(1768874694.960:1337): pid=10132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=69 res=1 Jan 20 02:04:54.960000 audit[10132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee5e508c0 a2=3 a3=0 items=0 ppid=1 pid=10132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:55.064446 systemd[1]: Started session-69.scope - Session 69 of User core. 
Jan 20 02:04:55.090855 kernel: audit: type=1300 audit(1768874694.960:1337): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee5e508c0 a2=3 a3=0 items=0 ppid=1 pid=10132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:54.960000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:55.111888 kernel: audit: type=1327 audit(1768874694.960:1337): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:55.123657 kernel: audit: type=1105 audit(1768874695.102:1338): pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:55.102000 audit[10132]: USER_START pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:55.144000 audit[10135]: CRED_ACQ pid=10135 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:55.218868 kernel: audit: type=1103 audit(1768874695.144:1339): pid=10135 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:56.009963 kubelet[3123]: E0120 02:04:56.009584 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d" Jan 20 02:04:56.028337 sshd[10135]: Connection closed by 10.0.0.1 port 44984 Jan 20 02:04:56.028066 sshd-session[10132]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:56.031000 audit[10132]: USER_END pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:56.107312 systemd[1]: sshd@68-10.0.0.44:22-10.0.0.1:44984.service: Deactivated successfully. Jan 20 02:04:56.161349 kernel: audit: type=1106 audit(1768874696.031:1340): pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:56.143052 systemd[1]: session-69.scope: Deactivated successfully. 
Jan 20 02:04:56.031000 audit[10132]: CRED_DISP pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:56.186425 systemd-logind[1623]: Session 69 logged out. Waiting for processes to exit. Jan 20 02:04:56.199851 kernel: audit: type=1104 audit(1768874696.031:1341): pid=10132 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:56.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.44:22-10.0.0.1:44984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:56.249308 systemd-logind[1623]: Removed session 69. Jan 20 02:04:59.770829 kubelet[3123]: E0120 02:04:59.770526 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:59.812564 kubelet[3123]: E0120 02:04:59.809841 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340" Jan 20 02:05:00.794573 kubelet[3123]: E0120 02:05:00.775938 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:01.134204 systemd[1]: Started sshd@69-10.0.0.44:22-10.0.0.1:44994.service - OpenSSH per-connection server daemon (10.0.0.1:44994). Jan 20 02:05:01.198417 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:01.198498 kernel: audit: type=1130 audit(1768874701.133:1343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.44:22-10.0.0.1:44994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:01.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.44:22-10.0.0.1:44994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:01.660000 audit[10148]: USER_ACCT pid=10148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:01.725248 kernel: audit: type=1101 audit(1768874701.660:1344): pid=10148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:01.686808 sshd-session[10148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:01.731052 sshd[10148]: Accepted publickey for core from 10.0.0.1 port 44994 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:05:01.672000 audit[10148]: CRED_ACQ pid=10148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:01.740212 systemd-logind[1623]: New session 70 of user core. Jan 20 02:05:01.844556 kernel: audit: type=1103 audit(1768874701.672:1345): pid=10148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:01.844879 kernel: audit: type=1006 audit(1768874701.680:1346): pid=10148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=70 res=1 Jan 20 02:05:01.844944 kernel: audit: type=1300 audit(1768874701.680:1346): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9d5d050 a2=3 a3=0 items=0 ppid=1 pid=10148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:01.680000 audit[10148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9d5d050 a2=3 a3=0 items=0 ppid=1 pid=10148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:01.680000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:01.896660 systemd[1]: Started session-70.scope - Session 70 of User core. 
Jan 20 02:05:01.910923 kernel: audit: type=1327 audit(1768874701.680:1346): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:01.950000 audit[10148]: USER_START pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:02.041994 kernel: audit: type=1105 audit(1768874701.950:1347): pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:02.135229 kernel: audit: type=1103 audit(1768874702.041:1348): pid=10151 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:02.041000 audit[10151]: CRED_ACQ pid=10151 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:02.806978 kubelet[3123]: E0120 02:05:02.806905 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058"
Jan 20 02:05:03.069999 sshd[10151]: Connection closed by 10.0.0.1 port 44994
Jan 20 02:05:03.084382 sshd-session[10148]: pam_unix(sshd:session): session closed for user core
Jan 20 02:05:03.092000 audit[10148]: USER_END pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:03.126856 systemd[1]: sshd@69-10.0.0.44:22-10.0.0.1:44994.service: Deactivated successfully.
Jan 20 02:05:03.179012 systemd[1]: session-70.scope: Deactivated successfully.
Jan 20 02:05:03.193229 kernel: audit: type=1106 audit(1768874703.092:1349): pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:03.202304 kernel: audit: type=1104 audit(1768874703.098:1350): pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:03.098000 audit[10148]: CRED_DISP pid=10148 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:03.190023 systemd-logind[1623]: Session 70 logged out. Waiting for processes to exit.
Jan 20 02:05:03.196151 systemd-logind[1623]: Removed session 70.
Jan 20 02:05:03.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.44:22-10.0.0.1:44994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:04.781398 kubelet[3123]: E0120 02:05:04.778769 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 02:05:04.815632 kubelet[3123]: E0120 02:05:04.815027 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 02:05:06.835202 kubelet[3123]: E0120 02:05:06.834901 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 02:05:08.143004 systemd[1]: Started sshd@70-10.0.0.44:22-10.0.0.1:48090.service - OpenSSH per-connection server daemon (10.0.0.1:48090).
Jan 20 02:05:08.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.44:22-10.0.0.1:48090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:08.233807 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:05:08.234018 kernel: audit: type=1130 audit(1768874708.142:1352): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.44:22-10.0.0.1:48090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:08.648000 audit[10164]: USER_ACCT pid=10164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:08.738402 kernel: audit: type=1101 audit(1768874708.648:1353): pid=10164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:08.738467 sshd[10164]: Accepted publickey for core from 10.0.0.1 port 48090 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:05:08.654081 sshd-session[10164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:05:08.728953 systemd-logind[1623]: New session 71 of user core.
Jan 20 02:05:08.801814 kernel: audit: type=1103 audit(1768874708.651:1354): pid=10164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:08.651000 audit[10164]: CRED_ACQ pid=10164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:08.745284 systemd[1]: Started session-71.scope - Session 71 of User core.
Jan 20 02:05:08.815464 kubelet[3123]: E0120 02:05:08.809851 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 02:05:08.855326 kernel: audit: type=1006 audit(1768874708.651:1355): pid=10164 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=71 res=1
Jan 20 02:05:08.651000 audit[10164]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffbfd31f0 a2=3 a3=0 items=0 ppid=1 pid=10164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:08.947861 kernel: audit: type=1300 audit(1768874708.651:1355): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffbfd31f0 a2=3 a3=0 items=0 ppid=1 pid=10164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:08.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:08.992432 kernel: audit: type=1327 audit(1768874708.651:1355): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:08.759000 audit[10164]: USER_START pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:09.113397 kernel: audit: type=1105 audit(1768874708.759:1356): pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:09.119051 kernel: audit: type=1103 audit(1768874708.802:1357): pid=10167 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:08.802000 audit[10167]: CRED_ACQ pid=10167 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:09.698371 sshd[10167]: Connection closed by 10.0.0.1 port 48090
Jan 20 02:05:09.709865 sshd-session[10164]: pam_unix(sshd:session): session closed for user core
Jan 20 02:05:09.730000 audit[10164]: USER_END pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:09.906476 kernel: audit: type=1106 audit(1768874709.730:1358): pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:10.649520 kernel: audit: type=1104 audit(1768874709.730:1359): pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:09.730000 audit[10164]: CRED_DISP pid=10164 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:11.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.44:22-10.0.0.1:48090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:11.746495 systemd[1]: sshd@70-10.0.0.44:22-10.0.0.1:48090.service: Deactivated successfully.
Jan 20 02:05:12.652476 systemd[1]: session-71.scope: Deactivated successfully.
Jan 20 02:05:13.972255 systemd-logind[1623]: Session 71 logged out. Waiting for processes to exit.
Jan 20 02:05:16.129153 systemd-logind[1623]: Removed session 71.
Jan 20 02:05:16.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.44:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:16.532633 systemd[1]: Started sshd@71-10.0.0.44:22-10.0.0.1:58770.service - OpenSSH per-connection server daemon (10.0.0.1:58770).
Jan 20 02:05:16.593854 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:05:16.614002 kernel: audit: type=1130 audit(1768874716.532:1361): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.44:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:18.295798 kubelet[3123]: E0120 02:05:18.247457 3123 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.475s"
Jan 20 02:05:18.441840 kubelet[3123]: E0120 02:05:18.441495 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-r7ptx" podUID="03f653dd-0210-41e9-9d70-a3905826baa1"
Jan 20 02:05:18.441840 kubelet[3123]: E0120 02:05:18.441657 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b798b596-wbvft" podUID="589f656f-1e0a-4667-bc0d-42908aab3340"
Jan 20 02:05:18.556000 audit[10182]: USER_ACCT pid=10182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:18.699433 kernel: audit: type=1101 audit(1768874718.556:1362): pid=10182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:18.586577 sshd-session[10182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:05:18.700396 sshd[10182]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:05:18.700782 kubelet[3123]: E0120 02:05:18.546299 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x6f5h" podUID="eeb09d5e-8a63-4fca-910b-ea49fa1ecf05"
Jan 20 02:05:18.700782 kubelet[3123]: E0120 02:05:18.632386 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8bc549748-txp25" podUID="44944462-7130-49ee-b7c5-4cb73dea6058"
Jan 20 02:05:18.579000 audit[10182]: CRED_ACQ pid=10182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:18.842307 kernel: audit: type=1103 audit(1768874718.579:1363): pid=10182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:18.845104 systemd-logind[1623]: New session 72 of user core.
Jan 20 02:05:18.879215 systemd[1]: Started session-72.scope - Session 72 of User core.
Jan 20 02:05:18.952062 kernel: audit: type=1006 audit(1768874718.579:1364): pid=10182 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=72 res=1
Jan 20 02:05:18.579000 audit[10182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f91c9a0 a2=3 a3=0 items=0 ppid=1 pid=10182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:19.066082 kernel: audit: type=1300 audit(1768874718.579:1364): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f91c9a0 a2=3 a3=0 items=0 ppid=1 pid=10182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:18.579000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:18.959000 audit[10182]: USER_START pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:19.314919 kernel: audit: type=1327 audit(1768874718.579:1364): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:19.315062 kernel: audit: type=1105 audit(1768874718.959:1365): pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:19.315122 kernel: audit: type=1103 audit(1768874718.984:1366): pid=10196 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:18.984000 audit[10196]: CRED_ACQ pid=10196 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:20.457337 sshd[10196]: Connection closed by 10.0.0.1 port 58770
Jan 20 02:05:20.462640 sshd-session[10182]: pam_unix(sshd:session): session closed for user core
Jan 20 02:05:20.488000 audit[10182]: USER_END pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:20.591491 kernel: audit: type=1106 audit(1768874720.488:1367): pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:20.572964 systemd-logind[1623]: Session 72 logged out. Waiting for processes to exit.
Jan 20 02:05:20.511000 audit[10182]: CRED_DISP pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:20.598624 systemd[1]: sshd@71-10.0.0.44:22-10.0.0.1:58770.service: Deactivated successfully.
Jan 20 02:05:20.659405 systemd[1]: session-72.scope: Deactivated successfully.
Jan 20 02:05:20.673344 kernel: audit: type=1104 audit(1768874720.511:1368): pid=10182 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:20.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.44:22-10.0.0.1:58770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:20.717432 systemd-logind[1623]: Removed session 72.
Jan 20 02:05:21.831871 kubelet[3123]: E0120 02:05:21.830625 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-szpvj" podUID="285811f9-e547-431f-a7b0-90e1226d2f4d"
Jan 20 02:05:21.839615 kubelet[3123]: E0120 02:05:21.839043 3123 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86d7bc7b4f-k5t2j" podUID="68cbc571-4445-4166-912c-8fdfe252aae2"
Jan 20 02:05:23.774918 kubelet[3123]: E0120 02:05:23.770669 3123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:05:25.583113 systemd[1]: Started sshd@72-10.0.0.44:22-10.0.0.1:37754.service - OpenSSH per-connection server daemon (10.0.0.1:37754).
Jan 20 02:05:25.606912 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:05:25.607079 kernel: audit: type=1130 audit(1768874725.577:1370): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:25.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:05:26.061977 sshd[10226]: Accepted publickey for core from 10.0.0.1 port 37754 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:05:26.060000 audit[10226]: USER_ACCT pid=10226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.094302 sshd-session[10226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:05:26.167492 kernel: audit: type=1101 audit(1768874726.060:1371): pid=10226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.092000 audit[10226]: CRED_ACQ pid=10226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.240389 systemd-logind[1623]: New session 73 of user core.
Jan 20 02:05:26.287102 kernel: audit: type=1103 audit(1768874726.092:1372): pid=10226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.308243 systemd[1]: Started session-73.scope - Session 73 of User core.
Jan 20 02:05:26.375185 kernel: audit: type=1006 audit(1768874726.092:1373): pid=10226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=73 res=1
Jan 20 02:05:26.375336 kernel: audit: type=1300 audit(1768874726.092:1373): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff60781d90 a2=3 a3=0 items=0 ppid=1 pid=10226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:26.092000 audit[10226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff60781d90 a2=3 a3=0 items=0 ppid=1 pid=10226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:05:26.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:26.524133 kernel: audit: type=1327 audit(1768874726.092:1373): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:05:26.347000 audit[10226]: USER_START pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.621069 kernel: audit: type=1105 audit(1768874726.347:1374): pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.404000 audit[10229]: CRED_ACQ pid=10229 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:26.725434 kernel: audit: type=1103 audit(1768874726.404:1375): pid=10229 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:27.164278 sshd[10229]: Connection closed by 10.0.0.1 port 37754
Jan 20 02:05:27.166940 sshd-session[10226]: pam_unix(sshd:session): session closed for user core
Jan 20 02:05:27.195000 audit[10226]: USER_END pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:27.204562 systemd[1]: sshd@72-10.0.0.44:22-10.0.0.1:37754.service: Deactivated successfully.
Jan 20 02:05:27.221614 systemd[1]: session-73.scope: Deactivated successfully.
Jan 20 02:05:27.234450 systemd-logind[1623]: Session 73 logged out. Waiting for processes to exit.
Jan 20 02:05:27.240163 systemd-logind[1623]: Removed session 73.
Jan 20 02:05:27.195000 audit[10226]: CRED_DISP pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:27.373062 kernel: audit: type=1106 audit(1768874727.195:1376): pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:27.373225 kernel: audit: type=1104 audit(1768874727.195:1377): pid=10226 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:05:27.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.44:22-10.0.0.1:37754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'