Jan 20 02:42:34.277298 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:31:13 -00 2026
Jan 20 02:42:34.277342 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8
Jan 20 02:42:34.277360 kernel: BIOS-provided physical RAM map:
Jan 20 02:42:34.277369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:42:34.277378 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:42:34.277388 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:42:34.277400 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:42:34.277409 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:42:34.277417 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:42:34.277425 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:42:34.277438 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:42:34.277447 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:42:34.277459 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:42:34.277469 kernel: NX (Execute Disable) protection: active
Jan 20 02:42:34.277536 kernel: APIC: Static calls initialized
Jan 20 02:42:34.277550 kernel: SMBIOS 2.8 present.
Jan 20 02:42:34.277559 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:42:34.277569 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:42:34.281678 kernel: Hypervisor detected: KVM
Jan 20 02:42:34.281691 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:42:34.281700 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:42:34.281710 kernel: kvm-clock: using sched offset of 47103924647 cycles
Jan 20 02:42:34.281721 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:42:34.281732 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:42:34.281754 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:42:34.281765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:42:34.281775 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:42:34.281786 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:42:34.281796 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:42:34.281807 kernel: Using GB pages for direct mapping
Jan 20 02:42:34.281818 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:42:34.281834 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:42:34.281846 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281859 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281871 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281883 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:42:34.281895 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281907 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281924 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281937 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:42:34.281952 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:42:34.281963 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:42:34.281973 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:42:34.281988 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:42:34.282001 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:42:34.282011 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:42:34.282021 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:42:34.282030 kernel: No NUMA configuration found
Jan 20 02:42:34.282041 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:42:34.282058 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:42:34.282071 kernel: Zone ranges:
Jan 20 02:42:34.282081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:42:34.282091 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:42:34.282101 kernel: Normal empty
Jan 20 02:42:34.282112 kernel: Device empty
Jan 20 02:42:34.282123 kernel: Movable zone start for each node
Jan 20 02:42:34.282136 kernel: Early memory node ranges
Jan 20 02:42:34.282152 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:42:34.282161 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:42:34.282171 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:42:34.282181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:42:34.282193 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:42:34.282206 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:42:34.282218 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:42:34.282232 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:42:34.282242 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:42:34.282253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:42:34.282264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:42:34.282277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:42:34.282288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:42:34.282298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:42:34.282312 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:42:34.282323 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:42:34.282336 kernel: TSC deadline timer available
Jan 20 02:42:34.282348 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:42:34.282357 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:42:34.282367 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:42:34.282377 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:42:34.282387 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:42:34.282405 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:42:34.282418 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:42:34.282429 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:42:34.282439 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:42:34.282448 kernel: kvm-guest: setup PV sched yield
Jan 20 02:42:34.282459 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:42:34.282525 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:42:34.282546 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:42:34.282558 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:42:34.282568 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:42:34.282620 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:42:34.282632 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:42:34.282641 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:42:34.282652 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:42:34.282668 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8
Jan 20 02:42:34.282681 kernel: random: crng init done
Jan 20 02:42:34.282693 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:42:34.282703 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:42:34.282713 kernel: Fallback order for Node 0: 0
Jan 20 02:42:34.282723 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:42:34.282734 kernel: Policy zone: DMA32
Jan 20 02:42:34.282750 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:42:34.282760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:42:34.282770 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:42:34.282780 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:42:34.282793 kernel: Dynamic Preempt: voluntary
Jan 20 02:42:34.282806 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:42:34.282817 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:42:34.282832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:42:34.282843 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:42:34.282856 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:42:34.282867 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:42:34.282876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:42:34.282887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:42:34.282897 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:42:34.282914 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:42:34.282927 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:42:34.282937 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:42:34.282947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:42:34.282967 kernel: Console: colour VGA+ 80x25
Jan 20 02:42:34.282984 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:42:34.282998 kernel: ACPI: Core revision 20240827
Jan 20 02:42:34.283009 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:42:34.283019 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:42:34.283034 kernel: x2apic enabled
Jan 20 02:42:34.283046 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:42:34.283060 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:42:34.283071 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:42:34.283085 kernel: kvm-guest: setup PV IPIs
Jan 20 02:42:34.283096 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:42:34.283109 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:42:34.283121 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:42:34.283132 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:42:34.283142 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:42:34.283153 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:42:34.283171 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:42:34.283181 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:42:34.283192 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:42:34.283202 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:42:34.283214 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:42:34.283229 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:42:34.283239 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:42:34.283254 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:42:34.283265 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:42:34.283277 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:42:34.283291 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:42:34.283305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:42:34.283316 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:42:34.283327 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:42:34.283344 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:42:34.283358 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:42:34.283369 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:42:34.283379 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:42:34.283392 kernel: landlock: Up and running.
Jan 20 02:42:34.283404 kernel: SELinux: Initializing.
Jan 20 02:42:34.283415 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:42:34.283430 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:42:34.283441 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:42:34.283452 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:42:34.283465 kernel: signal: max sigframe size: 1776
Jan 20 02:42:34.283535 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:42:34.283547 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:42:34.283558 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:42:34.283611 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:42:34.283622 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:42:34.283633 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:42:34.283643 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:42:34.283653 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:42:34.283664 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:42:34.283678 kernel: Memory: 2447336K/2571752K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 118476K reserved, 0K cma-reserved)
Jan 20 02:42:34.283694 kernel: devtmpfs: initialized
Jan 20 02:42:34.283704 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:42:34.283714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:42:34.283725 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:42:34.283738 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:42:34.283749 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:42:34.283760 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:42:34.283775 kernel: audit: type=2000 audit(1768876927.834:1): state=initialized audit_enabled=0 res=1
Jan 20 02:42:34.283787 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:42:34.283797 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:42:34.283808 kernel: cpuidle: using governor menu
Jan 20 02:42:34.283821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:42:34.283834 kernel: dca service started, version 1.12.1
Jan 20 02:42:34.283846 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:42:34.283861 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:42:34.283874 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:42:34.283886 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:42:34.283898 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:42:34.283910 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:42:34.283922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:42:34.283934 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:42:34.283949 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:42:34.283961 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:42:34.283973 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:42:34.283983 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:42:34.283995 kernel: ACPI: Interpreter enabled
Jan 20 02:42:34.284007 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:42:34.284020 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:42:34.284036 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:42:34.284047 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:42:34.284057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:42:34.284070 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:42:34.284532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:42:34.296769 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:42:34.297062 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:42:34.297084 kernel: PCI host bridge to bus 0000:00
Jan 20 02:42:34.297319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:42:34.297637 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:42:34.297859 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:42:34.298092 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:42:34.298323 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:42:34.307854 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:42:34.308154 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:42:34.308434 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:42:34.313249 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:42:34.316843 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:42:34.317122 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:42:34.317369 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:42:34.317721 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:42:34.317975 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 12695 usecs
Jan 20 02:42:34.318249 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:42:34.318568 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:42:34.318861 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:42:34.319105 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:42:34.319351 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:42:34.327994 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:42:34.328306 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:42:34.336980 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:42:34.337284 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:42:34.337655 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:42:34.337903 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:42:34.338153 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:42:34.338413 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:42:34.338781 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:42:34.339031 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:42:34.339279 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 13671 usecs
Jan 20 02:42:34.349890 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:42:34.350217 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:42:34.350538 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:42:34.352982 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:42:34.353244 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:42:34.353262 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:42:34.353276 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:42:34.353288 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:42:34.353310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:42:34.353321 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:42:34.353332 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:42:34.353343 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:42:34.353357 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:42:34.353369 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:42:34.353380 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:42:34.353395 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:42:34.353408 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:42:34.353419 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:42:34.353429 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:42:34.353441 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:42:34.353455 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:42:34.353466 kernel: iommu: Default domain type: Translated
Jan 20 02:42:34.353536 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:42:34.353548 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:42:34.353560 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:42:34.361065 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:42:34.361108 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:42:34.361418 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:42:34.377954 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:42:34.378225 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:42:34.378254 kernel: vgaarb: loaded
Jan 20 02:42:34.378267 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:42:34.378279 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:42:34.378292 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:42:34.378304 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:42:34.378317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:42:34.378329 kernel: pnp: PnP ACPI init
Jan 20 02:42:34.378729 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:42:34.378749 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:42:34.378762 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:42:34.378777 kernel: NET: Registered PF_INET protocol family
Jan 20 02:42:34.378789 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:42:34.378801 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:42:34.378819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:42:34.378832 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:42:34.378845 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:42:34.378857 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:42:34.378870 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:42:34.378882 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:42:34.378893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:42:34.378909 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:42:34.379140 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:42:34.379348 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:42:34.379674 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:42:34.379908 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:42:34.380141 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:42:34.380362 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:42:34.380389 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:42:34.380403 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:42:34.380415 kernel: Initialise system trusted keyrings
Jan 20 02:42:34.380428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:42:34.380441 kernel: Key type asymmetric registered
Jan 20 02:42:34.380454 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:42:34.380468 kernel: hrtimer: interrupt took 5037453 ns
Jan 20 02:42:34.380545 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:42:34.380557 kernel: io scheduler mq-deadline registered
Jan 20 02:42:34.380567 kernel: io scheduler kyber registered
Jan 20 02:42:34.386764 kernel: io scheduler bfq registered
Jan 20 02:42:34.386781 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:42:34.386794 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:42:34.386806 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:42:34.386828 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:42:34.386840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:42:34.386854 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:42:34.386867 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:42:34.386879 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:42:34.386891 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:42:34.387229 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:42:34.387255 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:42:34.393768 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:42:34.394086 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:42:26 UTC (1768876946)
Jan 20 02:42:34.394302 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:42:34.394318 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:42:34.394330 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:42:34.394341 kernel: Segment Routing with IPv6
Jan 20 02:42:34.394363 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:42:34.394374 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:42:34.394385 kernel: Key type dns_resolver registered
Jan 20 02:42:34.394396 kernel: IPI shorthand broadcast: enabled
Jan 20 02:42:34.394408 kernel: sched_clock: Marking stable (9662114070, 5398823001)->(20110941399, -5050004328)
Jan 20 02:42:34.394419 kernel: registered taskstats version 1
Jan 20 02:42:34.394431 kernel: Loading compiled-in X.509 certificates
Jan 20 02:42:34.394447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: afdfbfc7519ef3fa38aa4389b822f24e81c62f9e'
Jan 20 02:42:34.394462 kernel: Demotion targets for Node 0: null
Jan 20 02:42:34.394553 kernel: Key type .fscrypt registered
Jan 20 02:42:34.394566 kernel: Key type fscrypt-provisioning registered
Jan 20 02:42:34.394614 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:42:34.394628 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:42:34.394639 kernel: ima: No architecture policies found
Jan 20 02:42:34.394655 kernel: clk: Disabling unused clocks
Jan 20 02:42:34.394666 kernel: Freeing unused kernel image (initmem) memory: 15436K
Jan 20 02:42:34.394676 kernel: Write protecting the kernel read-only data: 45056k
Jan 20 02:42:34.394690 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K
Jan 20 02:42:34.394701 kernel: Run /init as init process
Jan 20 02:42:34.394712 kernel: with arguments:
Jan 20 02:42:34.394722 kernel: /init
Jan 20 02:42:34.394737 kernel: with environment:
Jan 20 02:42:34.394751 kernel: HOME=/
Jan 20 02:42:34.394762 kernel: TERM=linux
Jan 20 02:42:34.394773 kernel: SCSI subsystem initialized
Jan 20 02:42:34.394784 kernel: libata version 3.00 loaded.
Jan 20 02:42:34.395045 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 02:42:34.395066 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 02:42:34.395314 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 02:42:34.395656 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 02:42:34.395898 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 02:42:34.396171 kernel: scsi host0: ahci Jan 20 02:42:34.396463 kernel: scsi host1: ahci Jan 20 02:42:34.408435 kernel: scsi host2: ahci Jan 20 02:42:34.408865 kernel: scsi host3: ahci Jan 20 02:42:34.409139 kernel: scsi host4: ahci Jan 20 02:42:34.409397 kernel: scsi host5: ahci Jan 20 02:42:34.409418 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 20 02:42:34.409434 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 20 02:42:34.409460 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 20 02:42:34.409530 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 20 02:42:34.409545 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 20 02:42:34.409559 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 20 02:42:34.409612 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 02:42:34.409625 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 02:42:34.409638 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 02:42:34.409657 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 02:42:34.409668 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 02:42:34.409679 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 02:42:34.409690 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 02:42:34.409701 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 02:42:34.409714 
kernel: ata3.00: applying bridge limits Jan 20 02:42:34.409728 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 02:42:34.409743 kernel: ata3.00: configured for UDMA/100 Jan 20 02:42:34.410037 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 02:42:34.410309 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 02:42:34.410663 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 02:42:34.410945 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 02:42:34.410968 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 02:42:34.410986 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 02:42:34.410997 kernel: GPT:16515071 != 27000831 Jan 20 02:42:34.411008 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 02:42:34.411022 kernel: GPT:16515071 != 27000831 Jan 20 02:42:34.411036 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 02:42:34.411048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 02:42:34.411327 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 02:42:34.411351 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 02:42:34.411365 kernel: device-mapper: uevent: version 1.0.3 Jan 20 02:42:34.411380 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 02:42:34.411392 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 02:42:34.411408 kernel: raid6: avx2x4 gen() 12243 MB/s Jan 20 02:42:34.411419 kernel: raid6: avx2x2 gen() 11400 MB/s Jan 20 02:42:34.411433 kernel: raid6: avx2x1 gen() 5917 MB/s Jan 20 02:42:34.411449 kernel: raid6: using algorithm avx2x4 gen() 12243 MB/s Jan 20 02:42:34.411460 kernel: raid6: .... 
xor() 1824 MB/s, rmw enabled Jan 20 02:42:34.411524 kernel: raid6: using avx2x2 recovery algorithm Jan 20 02:42:34.411540 kernel: xor: automatically using best checksumming function avx Jan 20 02:42:34.411563 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 02:42:34.424437 kernel: BTRFS: device fsid ca982954-e818-4158-83b7-102f75baa62c devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182) Jan 20 02:42:34.424523 kernel: BTRFS info (device dm-0): first mount of filesystem ca982954-e818-4158-83b7-102f75baa62c Jan 20 02:42:34.424541 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:42:34.424554 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 02:42:34.424565 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 02:42:34.424616 kernel: loop: module loaded Jan 20 02:42:34.424644 kernel: loop0: detected capacity change from 0 to 100160 Jan 20 02:42:34.424655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 02:42:34.424670 systemd[1]: Successfully made /usr/ read-only. Jan 20 02:42:34.424687 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:42:34.424704 systemd[1]: Detected virtualization kvm. Jan 20 02:42:34.424716 systemd[1]: Detected architecture x86-64. Jan 20 02:42:34.424732 systemd[1]: Running in initrd. Jan 20 02:42:34.424744 systemd[1]: No hostname configured, using default hostname. Jan 20 02:42:34.424758 systemd[1]: Hostname set to . Jan 20 02:42:34.424771 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 02:42:34.424784 systemd[1]: Queued start job for default target initrd.target. 
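The GPT warnings a few entries back ("GPT:16515071 != 27000831") are the classic sign of a disk image written to a larger virtual disk: the backup GPT header still sits at the last LBA of the original image rather than at the end of the grown /dev/vda. A sketch of the arithmetic, using only figures taken from the messages above (nothing else is assumed):

```python
# Figures reported by virtio_blk and the kernel's GPT check above.
SECTOR = 512
vda_sectors = 27_000_832          # "[vda] 27000832 512-byte logical blocks"
backup_header_lba = 16_515_071    # where the image's alternate header sits

# The backup GPT header belongs on the very last LBA of the disk.
expected_lba = vda_sectors - 1
assert expected_lba == 27_000_831
assert backup_header_lba != expected_lba   # hence "GPT:16515071 != 27000831"

# Size of the grown disk, matching the "13.8 GB/12.9 GiB" in the log.
size_bytes = vda_sectors * SECTOR
print(round(size_bytes / 10**9, 1), "GB")    # 13.8
print(round(size_bytes / 2**30, 1), "GiB")   # 12.9

# Size the image was originally built for (roughly 7.9 GiB).
print(round((backup_header_lba + 1) * SECTOR / 2**30, 1), "GiB")
```

The disk-uuid.service entries later in this log perform exactly the repair the kernel suggests here, rewriting the primary and secondary headers in place.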
Jan 20 02:42:34.424797 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 02:42:34.424809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:42:34.424825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:42:34.424840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 02:42:34.424855 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:42:34.424870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 02:42:34.424882 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 02:42:34.424899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:42:34.424913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:42:34.424928 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 02:42:34.424939 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:42:34.424951 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:42:34.424964 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:42:34.424977 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:42:34.424994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:42:34.425007 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 02:42:34.425022 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:42:34.425034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 02:42:34.425046 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
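The device-mapper entries above show /usr being opened as a dm-verity target checked with sha256 against the verity.usrhash= root hash on the kernel command line. The real on-disk verity format has its own superblock, salt, and layout, but the core idea is a Merkle tree of block hashes; the following is an illustrative toy only (block handling and tree shape here are simplifying assumptions, not veritysetup's actual format):

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def hash_tree_root(data: bytes, block: int = BLOCK) -> bytes:
    """Toy Merkle root over fixed-size blocks (illustrative only)."""
    # Leaf layer: hash every data block.
    level = [hashlib.sha256(data[i:i + block]).digest()
             for i in range(0, max(len(data), 1), block)]
    # Reduce pairwise until a single root remains.
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

image = bytes(3 * BLOCK)           # stand-in for the /usr partition contents
root = hash_tree_root(image)

# Flipping a single bit anywhere changes the root, which is how a
# mismatch against verity.usrhash= would fail this mount.
tampered = bytearray(image)
tampered[5000] ^= 1
assert hash_tree_root(bytes(tampered)) != root
```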
Jan 20 02:42:34.425059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:42:34.425071 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:42:34.425089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:42:34.425102 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:42:34.425121 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 02:42:34.425134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 02:42:34.425148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:42:34.425160 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 02:42:34.425175 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 02:42:34.425192 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 02:42:34.425204 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:42:34.425217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:42:34.425230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:42:34.425245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 02:42:34.425258 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:42:34.425271 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 02:42:34.425285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:42:34.425362 systemd-journald[320]: Collecting audit messages is enabled. 
Jan 20 02:42:34.425398 systemd-journald[320]: Journal started Jan 20 02:42:34.425422 systemd-journald[320]: Runtime Journal (/run/log/journal/2e3a3ba1fcbb4afb8a6752b03e78d3b2) is 6M, max 48.2M, 42.2M free. Jan 20 02:42:34.458291 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:42:34.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:34.515769 kernel: audit: type=1130 audit(1768876954.467:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:34.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:34.521828 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:42:34.591172 kernel: audit: type=1130 audit(1768876954.521:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:34.564846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:42:34.712859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:42:35.350750 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
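The audit records above carry raw epoch timestamps (e.g. audit(1768876954.467:2)) while journald prefixes wall-clock times; both describe the same instant. A quick stdlib check of that correspondence, with the epoch value copied from the SERVICE_START record above:

```python
from datetime import datetime, timezone

# audit(1768876954.467:2) from the SERVICE_START record above;
# the integer part is a plain Unix epoch in seconds.
stamp = datetime.fromtimestamp(1768876954, tz=timezone.utc)

# Matches the wall-clock prefix journald put on the same event.
assert stamp.strftime("%b %d %H:%M:%S") == "Jan 20 02:42:34"
assert stamp.year == 2026
```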
Jan 20 02:42:35.350810 kernel: Bridge firewalling registered Jan 20 02:42:35.197963 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 02:42:35.421778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:42:35.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.519807 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 02:42:35.562260 kernel: audit: type=1130 audit(1768876955.475:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.527049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:42:35.711722 kernel: audit: type=1130 audit(1768876955.589:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.711816 kernel: audit: type=1130 audit(1768876955.644:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.627820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 02:42:35.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.690895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:42:35.801795 kernel: audit: type=1130 audit(1768876955.712:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:35.737995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 02:42:35.869136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:42:36.018798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:42:36.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:36.150796 kernel: audit: type=1130 audit(1768876956.075:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:36.147132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 02:42:36.320268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:42:36.404195 kernel: audit: type=1130 audit(1768876956.334:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:42:36.404235 kernel: audit: type=1334 audit(1768876956.353:10): prog-id=6 op=LOAD Jan 20 02:42:36.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:36.353000 audit: BPF prog-id=6 op=LOAD Jan 20 02:42:36.365825 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:42:36.447977 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8 Jan 20 02:42:37.037438 systemd-resolved[359]: Positive Trust Anchors: Jan 20 02:42:37.037544 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:42:37.037551 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 02:42:37.052280 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:42:37.337320 systemd-resolved[359]: Defaulting to hostname 'linux'. 
Jan 20 02:42:37.358703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:42:37.444217 kernel: audit: type=1130 audit(1768876957.369:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:37.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:37.371032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:42:37.812131 kernel: Loading iSCSI transport class v2.0-870. Jan 20 02:42:37.929811 kernel: iscsi: registered transport (tcp) Jan 20 02:42:38.092028 kernel: iscsi: registered transport (qla4xxx) Jan 20 02:42:38.092720 kernel: QLogic iSCSI HBA Driver Jan 20 02:42:38.287410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:42:38.425079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:42:38.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:38.455136 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:42:38.754388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 02:42:38.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:38.766646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 20 02:42:38.778058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 02:42:39.089686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:42:39.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:39.102000 audit: BPF prog-id=7 op=LOAD Jan 20 02:42:39.102000 audit: BPF prog-id=8 op=LOAD Jan 20 02:42:39.121067 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:42:39.408693 systemd-udevd[603]: Using default interface naming scheme 'v257'. Jan 20 02:42:39.560412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:42:39.685021 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 02:42:39.685065 kernel: audit: type=1130 audit(1768876959.591:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:39.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:39.631431 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 02:42:39.760973 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:42:39.837756 kernel: audit: type=1130 audit(1768876959.767:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:42:39.837796 kernel: audit: type=1334 audit(1768876959.788:19): prog-id=9 op=LOAD Jan 20 02:42:39.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:39.788000 audit: BPF prog-id=9 op=LOAD Jan 20 02:42:39.791778 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:42:39.875174 dracut-pre-trigger[692]: rd.md=0: removing MD RAID activation Jan 20 02:42:40.074760 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:42:40.138049 kernel: audit: type=1130 audit(1768876960.083:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:40.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:40.114833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:42:40.288864 systemd-networkd[701]: lo: Link UP Jan 20 02:42:40.288897 systemd-networkd[701]: lo: Gained carrier Jan 20 02:42:40.309815 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:42:40.390682 systemd[1]: Reached target network.target - Network. Jan 20 02:42:40.456934 kernel: audit: type=1130 audit(1768876960.389:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:40.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:42:40.911053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:42:40.979754 kernel: audit: type=1130 audit(1768876960.920:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:40.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:41.001152 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 02:42:41.400938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 02:42:41.556937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:42:41.673390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 02:42:41.783676 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 02:42:41.921971 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 02:42:41.973163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:42:41.977987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:42:42.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:42.042033 disk-uuid[773]: Primary Header is updated. Jan 20 02:42:42.042033 disk-uuid[773]: Secondary Entries is updated. Jan 20 02:42:42.042033 disk-uuid[773]: Secondary Header is updated. 
Jan 20 02:42:42.085201 kernel: audit: type=1131 audit(1768876962.005:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:42.042048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:42:42.060234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:42:42.140468 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 02:42:43.118562 kernel: AES CTR mode by8 optimization enabled Jan 20 02:42:43.332887 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:42:45.003346 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 02:42:45.003533 disk-uuid[774]: Warning: The kernel is still using the old partition table. Jan 20 02:42:45.003533 disk-uuid[774]: The new table will be used at the next reboot or after you Jan 20 02:42:45.003533 disk-uuid[774]: run partprobe(8) or kpartx(8) Jan 20 02:42:45.003533 disk-uuid[774]: The operation has completed successfully. Jan 20 02:42:43.332904 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:42:45.210129 kernel: audit: type=1130 audit(1768876965.132:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:42:43.334127 systemd-networkd[701]: eth0: Link UP Jan 20 02:42:45.318122 kernel: audit: type=1130 audit(1768876965.224:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.318169 kernel: audit: type=1130 audit(1768876965.225:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.318204 kernel: audit: type=1131 audit(1768876965.225:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:43.354949 systemd-networkd[701]: eth0: Gained carrier Jan 20 02:42:43.354970 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:42:43.456036 systemd-networkd[701]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:42:45.016533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:42:45.141152 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
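The DHCPv4 lease logged above (10.0.0.129/16, gateway 10.0.0.1, acquired from 10.0.0.1) can be sanity-checked with the stdlib ipaddress module; the addresses below are exactly the ones in the log:

```python
import ipaddress

# Values from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("10.0.0.129/16")
gateway = ipaddress.ip_address("10.0.0.1")

# The gateway must be reachable inside the leased prefix.
assert gateway in iface.network          # 10.0.0.0/16
assert iface.network.num_addresses == 2 ** 16
print(iface.network)   # 10.0.0.0/16
```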
Jan 20 02:42:45.225366 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 02:42:45.225603 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 02:42:45.234566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:42:45.342413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:42:45.367469 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:42:45.430617 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 02:42:45.518326 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 02:42:45.679976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 02:42:45.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.722585 kernel: audit: type=1130 audit(1768876965.698:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.737102 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Jan 20 02:42:45.748199 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 02:42:45.748272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:42:45.779225 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:42:45.779308 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:42:45.827584 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 02:42:45.840305 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 20 02:42:45.922004 kernel: audit: type=1130 audit(1768876965.845:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:45.854751 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 02:42:45.977033 systemd-networkd[701]: eth0: Gained IPv6LL Jan 20 02:42:47.751975 ignition[888]: Ignition 2.22.0 Jan 20 02:42:47.752126 ignition[888]: Stage: fetch-offline Jan 20 02:42:47.752228 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:42:47.752248 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:42:47.752721 ignition[888]: parsed url from cmdline: "" Jan 20 02:42:47.752728 ignition[888]: no config URL provided Jan 20 02:42:47.752736 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 02:42:47.752752 ignition[888]: no config at "/usr/lib/ignition/user.ign" Jan 20 02:42:47.752921 ignition[888]: op(1): [started] loading QEMU firmware config module Jan 20 02:42:47.752928 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 02:42:47.847876 ignition[888]: op(1): [finished] loading QEMU firmware config module Jan 20 02:42:48.213165 ignition[888]: parsing config with SHA512: 5806980ff83bdb00451efc4aa936b2398876914c7042512f269312b2f6554f0136f7836146b0240d3cfd7dcebc58d5f4233da71b1b4a4c8b845efb57882d5d06 Jan 20 02:42:49.095386 unknown[888]: fetched base config from "system" Jan 20 02:42:49.095943 unknown[888]: fetched user config from "qemu" Jan 20 02:42:49.132838 ignition[888]: fetch-offline: fetch-offline passed Jan 20 02:42:49.145269 ignition[888]: Ignition finished successfully Jan 20 02:42:49.170768 systemd[1]: 
Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:42:49.263347 kernel: audit: type=1130 audit(1768876969.211:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:49.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:49.227329 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 02:42:49.237622 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 02:42:50.293529 ignition[898]: Ignition 2.22.0 Jan 20 02:42:50.293628 ignition[898]: Stage: kargs Jan 20 02:42:50.322392 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:42:50.325179 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:42:50.338914 ignition[898]: kargs: kargs passed Jan 20 02:42:50.339111 ignition[898]: Ignition finished successfully Jan 20 02:42:50.411456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 02:42:50.506320 kernel: audit: type=1130 audit(1768876970.428:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:50.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:50.466356 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
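The Ignition entries above run through a fixed sequence of stages (fetch-offline, then kargs, then disks and mount), each announced with the same line shape. A small parser for entries of this log, tested against one line copied verbatim from above (the regex is an assumption about this log's layout, not any Ignition or journald interface):

```python
import re

# "<Mon> <day> <HH:MM:SS.frac> <source>: <message>" as seen in this log.
LINE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<src>[\w\-\[\]]+):\s+(?P<msg>.*)$"
)

sample = "Jan 20 02:42:50.339111 ignition[898]: Ignition finished successfully"
m = LINE.match(sample)
assert m is not None
assert m["ts"] == "Jan 20 02:42:50.339111"
assert m["src"] == "ignition[898]"
assert m["msg"] == "Ignition finished successfully"
```

The same pattern matches the kernel: and systemd[1]: sources in this log, so it works for splitting the full transcript into (timestamp, source, message) triples.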
Jan 20 02:42:50.966780 ignition[906]: Ignition 2.22.0 Jan 20 02:42:50.966840 ignition[906]: Stage: disks Jan 20 02:42:50.967298 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:42:50.967314 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:42:51.002970 ignition[906]: disks: disks passed Jan 20 02:42:51.023755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 02:42:51.086035 kernel: audit: type=1130 audit(1768876971.046:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:51.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:51.003192 ignition[906]: Ignition finished successfully Jan 20 02:42:51.084930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 02:42:51.112328 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 02:42:51.144595 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:42:51.205110 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:42:51.211697 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:42:51.296988 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 02:42:51.637198 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 02:42:51.673114 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 02:42:51.758345 kernel: audit: type=1130 audit(1768876971.708:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:42:51.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:42:51.717935 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 02:42:52.547032 kernel: EXT4-fs (vda9): mounted filesystem dbcb8eb1-a16c-4a1a-8ee4-d933bd0ee436 r/w with ordered data mode. Quota mode: none. Jan 20 02:42:52.554779 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 02:42:52.570279 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 02:42:52.612081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 02:42:52.642933 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 02:42:52.643534 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 02:42:52.643599 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 02:42:52.783756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (925) Jan 20 02:42:52.643635 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 02:42:52.810747 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 02:42:52.810799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:42:52.714854 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 02:42:52.744964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 02:42:52.886600 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:42:52.886720 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:42:52.900209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 02:42:53.306337 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 02:42:53.349898 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Jan 20 02:42:53.403059 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 02:42:53.441173 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 02:42:54.306938 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:42:54.366908 kernel: audit: type=1130 audit(1768876974.323:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:54.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:54.335960 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 02:42:54.405040 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:42:54.467978 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:42:54.508247 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8
Jan 20 02:42:54.893904 ignition[1037]: INFO : Ignition 2.22.0
Jan 20 02:42:54.905005 ignition[1037]: INFO : Stage: mount
Jan 20 02:42:54.905005 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:42:54.905005 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:42:54.964235 ignition[1037]: INFO : mount: mount passed
Jan 20 02:42:54.964235 ignition[1037]: INFO : Ignition finished successfully
Jan 20 02:42:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:55.005934 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:42:55.099013 kernel: audit: type=1130 audit(1768876975.004:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:55.099076 kernel: audit: type=1130 audit(1768876975.004:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:42:55.010575 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 02:42:55.060031 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 02:42:55.201222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:42:55.284960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1052)
Jan 20 02:42:55.305340 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8
Jan 20 02:42:55.305420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:42:55.366837 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:42:55.366978 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:42:55.399949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:42:55.654926 ignition[1068]: INFO : Ignition 2.22.0
Jan 20 02:42:55.693758 ignition[1068]: INFO : Stage: files
Jan 20 02:42:55.693758 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:42:55.693758 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:42:55.693758 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 02:42:55.756291 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 02:42:55.756291 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 02:42:55.800889 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 02:42:55.800889 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 02:42:55.800889 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 02:42:55.770952 unknown[1068]: wrote ssh authorized keys file for user: core
Jan 20 02:42:55.889396 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:42:55.889396 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 02:42:56.253283 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 02:42:57.750068 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1728711169 wd_nsec: 1728710283
Jan 20 02:42:58.237829 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:42:58.237829 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:42:58.307279 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 20 02:42:59.318097 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:43:02.595053 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:43:02.595053 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:43:02.638925 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:43:02.652991 ignition[1068]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:43:02.843609 ignition[1068]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:43:02.881589 ignition[1068]: INFO : files: files passed
Jan 20 02:43:02.881589 ignition[1068]: INFO : Ignition finished successfully
Jan 20 02:43:02.898061 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:43:03.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.063375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 02:43:03.109866 kernel: audit: type=1130 audit(1768876983.037:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.127073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:43:03.167561 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:43:03.167962 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:43:03.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.214543 kernel: audit: type=1130 audit(1768876983.201:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.214616 kernel: audit: type=1131 audit(1768876983.201:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.236944 initrd-setup-root-after-ignition[1100]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:43:03.255830 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:43:03.255830 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:43:03.297192 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:43:03.315090 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:43:03.361365 kernel: audit: type=1130 audit(1768876983.323:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.326886 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:43:03.361436 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:43:03.599955 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:43:03.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.607621 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:43:03.688849 kernel: audit: type=1130 audit(1768876983.613:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.689288 kernel: audit: type=1131 audit(1768876983.613:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:03.620298 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:43:03.703263 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 02:43:03.738442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:43:03.762906 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:43:04.157733 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:43:04.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.261796 kernel: audit: type=1130 audit(1768876984.220:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.290180 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:43:04.420243 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:43:04.604274 kernel: audit: type=1131 audit(1768876984.434:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.604316 kernel: audit: type=1131 audit(1768876984.436:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.420686 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:43:04.433725 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:43:04.433976 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:43:04.434154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:43:04.434360 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:43:04.434880 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:43:04.435010 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:43:04.800304 kernel: audit: type=1131 audit(1768876984.690:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.435141 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:43:04.435279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:43:04.435409 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:43:04.435598 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:43:04.435721 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:43:04.435887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:43:04.436024 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:43:04.436162 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:43:04.436280 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:43:04.436374 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:43:04.436613 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:43:04.436932 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:43:04.437068 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:43:04.437158 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:43:04.442249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:43:04.690413 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:43:04.690844 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:43:04.691238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:43:04.691392 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:43:05.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.691684 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:43:05.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:04.691828 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:43:04.702421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:43:04.918548 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:43:04.947843 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:43:04.953131 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:43:04.953305 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:43:05.062083 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:43:05.062399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:43:05.123187 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 20 02:43:05.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.124630 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:43:05.181157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:43:05.181373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:43:05.256995 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:43:05.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.257203 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:43:05.298160 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:43:05.362420 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 02:43:05.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.362939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:43:05.469905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:43:05.500891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:43:05.521091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:43:05.550076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:43:05.555151 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:43:05.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.614528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:43:05.614729 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:43:05.736143 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:43:05.736341 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:43:05.805540 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:43:05.901451 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:43:05.910220 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:43:05.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.955320 ignition[1127]: INFO : Ignition 2.22.0
Jan 20 02:43:05.955320 ignition[1127]: INFO : Stage: umount
Jan 20 02:43:05.955320 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:43:05.955320 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:43:05.955320 ignition[1127]: INFO : umount: umount passed
Jan 20 02:43:05.955320 ignition[1127]: INFO : Ignition finished successfully
Jan 20 02:43:05.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.943014 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:43:05.943226 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:43:06.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:05.993367 systemd[1]: Stopped target network.target - Network.
Jan 20 02:43:06.038567 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:43:06.038707 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:43:06.040063 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:43:06.040128 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:43:06.094732 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:43:06.094900 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:43:06.113110 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:43:06.113227 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:43:06.136350 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:43:06.136470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:43:06.160536 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:43:06.319635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:43:06.337171 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:43:06.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.337354 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:43:06.413441 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:43:06.418903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:43:06.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.449000 audit: BPF prog-id=9 op=UNLOAD
Jan 20 02:43:06.466000 audit: BPF prog-id=6 op=UNLOAD
Jan 20 02:43:06.449135 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:43:06.466998 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:43:06.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.467124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:43:06.513795 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 02:43:06.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.538216 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 02:43:06.539145 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:43:06.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:06.594367 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:43:06.603636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:43:06.669955 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 02:43:06.687381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:43:06.787307 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:43:06.909015 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 02:43:06.909318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:43:06.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:06.991973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 02:43:06.992084 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 02:43:07.023705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 02:43:07.034376 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:43:07.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.057210 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 02:43:07.057371 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:43:07.111923 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 02:43:07.112036 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 02:43:07.112210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 02:43:07.112282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 02:43:07.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.200899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 02:43:07.289452 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 02:43:07.289819 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:43:07.327285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 02:43:07.337694 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:43:07.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.486252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:43:07.486401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:43:07.548665 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 02:43:07.583168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 02:43:07.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:43:07.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.616302 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 02:43:07.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:07.616545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 02:43:07.642204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 02:43:07.654272 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 02:43:07.764130 systemd[1]: Switching root. Jan 20 02:43:07.892541 systemd-journald[320]: Journal stopped Jan 20 02:43:15.675256 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). 
Jan 20 02:43:15.675382 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 02:43:15.675409 kernel: SELinux: policy capability open_perms=1 Jan 20 02:43:15.675428 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 02:43:15.675450 kernel: SELinux: policy capability always_check_network=0 Jan 20 02:43:15.675526 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 02:43:15.675547 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 02:43:15.675564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 02:43:15.675598 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 02:43:15.675614 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 02:43:15.675638 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 02:43:15.675663 kernel: audit: type=1403 audit(1768876989.034:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 02:43:15.675687 systemd[1]: Successfully loaded SELinux policy in 452.191ms. Jan 20 02:43:15.675713 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 70.683ms. Jan 20 02:43:15.675732 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:43:15.675750 systemd[1]: Detected virtualization kvm. Jan 20 02:43:15.675768 systemd[1]: Detected architecture x86-64. Jan 20 02:43:15.675786 systemd[1]: Detected first boot. Jan 20 02:43:15.675806 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Jan 20 02:43:15.675859 kernel: audit: type=1334 audit(1768876989.543:82): prog-id=10 op=LOAD Jan 20 02:43:15.675878 kernel: audit: type=1334 audit(1768876989.546:83): prog-id=10 op=UNLOAD Jan 20 02:43:15.675895 kernel: audit: type=1334 audit(1768876989.550:84): prog-id=11 op=LOAD Jan 20 02:43:15.675912 kernel: audit: type=1334 audit(1768876989.550:85): prog-id=11 op=UNLOAD Jan 20 02:43:15.675936 zram_generator::config[1173]: No configuration found. Jan 20 02:43:15.675955 kernel: Guest personality initialized and is inactive Jan 20 02:43:15.675974 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 02:43:15.675992 kernel: Initialized host personality Jan 20 02:43:15.676009 kernel: NET: Registered PF_VSOCK protocol family Jan 20 02:43:15.676026 systemd[1]: Populated /etc with preset unit settings. Jan 20 02:43:15.676044 kernel: audit: type=1334 audit(1768876992.559:86): prog-id=12 op=LOAD Jan 20 02:43:15.676061 kernel: audit: type=1334 audit(1768876992.559:87): prog-id=3 op=UNLOAD Jan 20 02:43:15.676077 kernel: audit: type=1334 audit(1768876992.560:88): prog-id=13 op=LOAD Jan 20 02:43:15.676096 kernel: audit: type=1334 audit(1768876992.560:89): prog-id=14 op=LOAD Jan 20 02:43:15.676113 kernel: audit: type=1334 audit(1768876992.560:90): prog-id=4 op=UNLOAD Jan 20 02:43:15.676130 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 02:43:15.676148 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 02:43:15.676165 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 02:43:15.676191 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 02:43:15.676210 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 02:43:15.676230 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 02:43:15.676248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 20 02:43:15.676266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 02:43:15.676284 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 02:43:15.676302 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 02:43:15.676324 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 02:43:15.676345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:43:15.676363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:43:15.676382 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 02:43:15.676400 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 02:43:15.676418 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 02:43:15.676436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:43:15.676453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 02:43:15.676516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:43:15.676536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:43:15.676554 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 02:43:15.676571 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 02:43:15.676589 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 02:43:15.676607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 02:43:15.676625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 20 02:43:15.676646 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:43:15.676664 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 02:43:15.676682 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:43:15.676700 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:43:15.676717 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 02:43:15.676735 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 02:43:15.676752 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 02:43:15.676772 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:43:15.676790 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 02:43:15.676809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:43:15.677941 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 02:43:15.677966 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 02:43:15.677986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:43:15.678006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:43:15.678025 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 02:43:15.678050 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 02:43:15.678070 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 02:43:15.678088 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 02:43:15.678106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 20 02:43:15.678126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 02:43:15.678144 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 02:43:15.678166 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 02:43:15.678186 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 02:43:15.678204 systemd[1]: Reached target machines.target - Containers. Jan 20 02:43:15.678223 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 02:43:15.678241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:43:15.678260 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:43:15.678278 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 02:43:15.678300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:43:15.678319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:43:15.678343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:43:15.678361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 02:43:15.678380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:43:15.678399 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 02:43:15.678418 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 02:43:15.678444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 02:43:15.678467 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jan 20 02:43:15.678543 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 02:43:15.678566 kernel: audit: type=1131 audit(1768876994.846:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:15.678585 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 02:43:15.678605 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:43:15.678630 kernel: audit: type=1131 audit(1768876994.945:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:15.678647 kernel: audit: type=1334 audit(1768876994.983:99): prog-id=14 op=UNLOAD Jan 20 02:43:15.678663 kernel: audit: type=1334 audit(1768876994.983:100): prog-id=13 op=UNLOAD Jan 20 02:43:15.678682 kernel: audit: type=1334 audit(1768876995.032:101): prog-id=15 op=LOAD Jan 20 02:43:15.678699 kernel: audit: type=1334 audit(1768876995.092:102): prog-id=16 op=LOAD Jan 20 02:43:15.678715 kernel: audit: type=1334 audit(1768876995.137:103): prog-id=17 op=LOAD Jan 20 02:43:15.678735 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:43:15.678755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:43:15.678771 kernel: fuse: init (API version 7.41) Jan 20 02:43:15.678786 kernel: ACPI: bus type drm_connector registered Jan 20 02:43:15.678805 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:43:15.678884 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 20 02:43:15.678905 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 02:43:15.678924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:43:15.678943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:43:15.679011 systemd-journald[1259]: Collecting audit messages is enabled. Jan 20 02:43:15.679051 kernel: audit: type=1305 audit(1768876995.650:104): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 02:43:15.679074 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 02:43:15.679094 systemd-journald[1259]: Journal started Jan 20 02:43:15.679125 systemd-journald[1259]: Runtime Journal (/run/log/journal/2e3a3ba1fcbb4afb8a6752b03e78d3b2) is 6M, max 48.2M, 42.2M free. Jan 20 02:43:13.481000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 02:43:14.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:14.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:43:14.983000 audit: BPF prog-id=14 op=UNLOAD Jan 20 02:43:14.983000 audit: BPF prog-id=13 op=UNLOAD Jan 20 02:43:15.032000 audit: BPF prog-id=15 op=LOAD Jan 20 02:43:15.092000 audit: BPF prog-id=16 op=LOAD Jan 20 02:43:15.137000 audit: BPF prog-id=17 op=LOAD Jan 20 02:43:15.650000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 02:43:12.533658 systemd[1]: Queued start job for default target multi-user.target. Jan 20 02:43:12.561239 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 02:43:12.567001 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 02:43:12.569408 systemd[1]: systemd-journald.service: Consumed 2.104s CPU time. Jan 20 02:43:15.691911 kernel: audit: type=1300 audit(1768876995.650:104): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff21bc82e0 a2=4000 a3=0 items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:15.650000 audit[1259]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff21bc82e0 a2=4000 a3=0 items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:15.650000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 02:43:15.780557 kernel: audit: type=1327 audit(1768876995.650:104): proctitle="/usr/lib/systemd/systemd-journald" Jan 20 02:43:15.801614 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 20 02:43:15.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:15.815367 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 02:43:15.835681 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 02:43:15.855190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 02:43:15.878048 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 02:43:15.899689 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 02:43:15.930209 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 02:43:15.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:15.959051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:43:15.985112 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 02:43:15.985691 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 02:43:15.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:43:16.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.014561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:43:16.014947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:43:16.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.037936 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:43:16.038246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:43:16.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.066391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:43:16.079173 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:43:16.106766 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 02:43:16.108342 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 20 02:43:16.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.138891 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:43:16.139386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:43:16.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.180127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:43:16.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:43:16.209783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:43:16.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.255198 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 02:43:16.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.294455 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 02:43:16.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.384335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:43:16.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.453451 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:43:16.475241 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 02:43:16.510170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 02:43:16.556790 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 20 02:43:16.593608 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 02:43:16.593685 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:43:16.630413 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 02:43:16.638030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:43:16.638286 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 02:43:16.672788 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 02:43:16.695121 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 02:43:16.735204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:43:16.761304 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 02:43:16.781011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:43:16.789910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:43:16.832618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 02:43:16.876966 systemd-journald[1259]: Time spent on flushing to /var/log/journal/2e3a3ba1fcbb4afb8a6752b03e78d3b2 is 168.107ms for 1141 entries. Jan 20 02:43:16.876966 systemd-journald[1259]: System Journal (/var/log/journal/2e3a3ba1fcbb4afb8a6752b03e78d3b2) is 8M, max 163.5M, 155.5M free. Jan 20 02:43:17.097419 systemd-journald[1259]: Received client request to flush runtime journal. 
Jan 20 02:43:17.097536 kernel: loop1: detected capacity change from 0 to 219144 Jan 20 02:43:17.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:16.874265 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 02:43:16.937622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 02:43:16.962969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 02:43:16.999250 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 02:43:17.040418 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 02:43:17.106325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 02:43:17.154132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 02:43:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:17.265673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:43:17.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:17.327798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 02:43:17.334729 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 20 02:43:17.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:17.420397 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 02:43:17.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:43:17.462000 audit: BPF prog-id=18 op=LOAD Jan 20 02:43:17.462000 audit: BPF prog-id=19 op=LOAD Jan 20 02:43:17.462000 audit: BPF prog-id=20 op=LOAD Jan 20 02:43:17.468957 kernel: loop2: detected capacity change from 0 to 111544 Jan 20 02:43:17.476566 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 02:43:17.492000 audit: BPF prog-id=21 op=LOAD Jan 20 02:43:17.499284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:43:17.534740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:43:17.561000 audit: BPF prog-id=22 op=LOAD Jan 20 02:43:17.577000 audit: BPF prog-id=23 op=LOAD Jan 20 02:43:17.577000 audit: BPF prog-id=24 op=LOAD Jan 20 02:43:17.588077 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 02:43:17.605000 audit: BPF prog-id=25 op=LOAD Jan 20 02:43:17.606000 audit: BPF prog-id=26 op=LOAD Jan 20 02:43:17.606000 audit: BPF prog-id=27 op=LOAD Jan 20 02:43:17.619717 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 02:43:17.709822 kernel: loop3: detected capacity change from 0 to 119256 Jan 20 02:43:17.749375 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Jan 20 02:43:17.749430 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. 
Jan 20 02:43:17.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:17.797128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:43:17.951932 kernel: loop4: detected capacity change from 0 to 219144
Jan 20 02:43:18.039320 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 02:43:18.041417 systemd-nsresourced[1317]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 02:43:18.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:18.075641 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 02:43:18.100579 kernel: loop5: detected capacity change from 0 to 111544
Jan 20 02:43:18.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:18.275595 kernel: loop6: detected capacity change from 0 to 119256
Jan 20 02:43:18.388556 (sd-merge)[1321]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 02:43:18.422150 (sd-merge)[1321]: Merged extensions into '/usr'.
Jan 20 02:43:18.475748 systemd[1]: Reload requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 02:43:18.476963 systemd[1]: Reloading...
Jan 20 02:43:18.592111 systemd-oomd[1312]: No swap; memory pressure usage will be degraded
Jan 20 02:43:18.662633 systemd-resolved[1313]: Positive Trust Anchors:
Jan 20 02:43:18.662685 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:43:18.662693 systemd-resolved[1313]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 02:43:18.662733 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:43:18.700985 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Jan 20 02:43:18.918600 zram_generator::config[1368]: No configuration found.
Jan 20 02:43:20.023545 systemd[1]: Reloading finished in 1542 ms.
Jan 20 02:43:20.112304 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 02:43:20.133391 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:43:20.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.144603 kernel: kauditd_printk_skb: 38 callbacks suppressed
Jan 20 02:43:20.144717 kernel: audit: type=1130 audit(1768877000.132:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.191068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 02:43:20.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.220074 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 02:43:20.245259 kernel: audit: type=1130 audit(1768877000.188:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.245421 kernel: audit: type=1130 audit(1768877000.211:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.347790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:43:20.357684 kernel: audit: type=1130 audit(1768877000.297:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:20.411072 systemd[1]: Starting ensure-sysext.service...
Jan 20 02:43:20.445377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:43:20.485000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 02:43:20.485000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 02:43:20.487000 audit: BPF prog-id=28 op=LOAD
Jan 20 02:43:20.487000 audit: BPF prog-id=29 op=LOAD
Jan 20 02:43:20.514560 kernel: audit: type=1334 audit(1768877000.485:147): prog-id=8 op=UNLOAD
Jan 20 02:43:20.514624 kernel: audit: type=1334 audit(1768877000.485:148): prog-id=7 op=UNLOAD
Jan 20 02:43:20.514648 kernel: audit: type=1334 audit(1768877000.487:149): prog-id=28 op=LOAD
Jan 20 02:43:20.514699 kernel: audit: type=1334 audit(1768877000.487:150): prog-id=29 op=LOAD
Jan 20 02:43:20.514320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:43:20.601000 audit: BPF prog-id=30 op=LOAD
Jan 20 02:43:20.645465 kernel: audit: type=1334 audit(1768877000.601:151): prog-id=30 op=LOAD
Jan 20 02:43:20.647760 kernel: audit: type=1334 audit(1768877000.601:152): prog-id=18 op=UNLOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=31 op=LOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=32 op=LOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 02:43:20.601000 audit: BPF prog-id=33 op=LOAD
Jan 20 02:43:20.610000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=34 op=LOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=35 op=LOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=36 op=LOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 02:43:20.639000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=37 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=38 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=39 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=40 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=41 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=42 op=LOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 02:43:20.657000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 02:43:20.727048 systemd[1]: Reload requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)...
Jan 20 02:43:20.727092 systemd[1]: Reloading...
Jan 20 02:43:20.815995 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:43:20.816128 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:43:20.820031 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:43:20.827138 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 20 02:43:20.827259 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 20 02:43:20.853725 systemd-udevd[1404]: Using default interface naming scheme 'v257'.
Jan 20 02:43:20.879181 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:43:20.879202 systemd-tmpfiles[1403]: Skipping /boot
Jan 20 02:43:20.923554 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:43:20.923780 systemd-tmpfiles[1403]: Skipping /boot
Jan 20 02:43:21.070572 zram_generator::config[1435]: No configuration found.
Jan 20 02:43:21.953080 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 02:43:22.101999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 02:43:22.177666 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 02:43:22.178315 kernel: ACPI: button: Power Button [PWRF]
Jan 20 02:43:22.178360 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 02:43:23.098854 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 02:43:23.107993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:43:23.128412 systemd[1]: Reloading finished in 2400 ms.
Jan 20 02:43:23.211661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:43:23.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:23.251654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:43:23.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:23.326000 audit: BPF prog-id=43 op=LOAD
Jan 20 02:43:23.326000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=44 op=LOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=45 op=LOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=46 op=LOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=47 op=LOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 02:43:23.331000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 02:43:23.395000 audit: BPF prog-id=48 op=LOAD
Jan 20 02:43:23.401000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 02:43:23.413000 audit: BPF prog-id=49 op=LOAD
Jan 20 02:43:23.414000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 02:43:23.414000 audit: BPF prog-id=50 op=LOAD
Jan 20 02:43:23.414000 audit: BPF prog-id=51 op=LOAD
Jan 20 02:43:23.414000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 02:43:23.414000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=52 op=LOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=53 op=LOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=54 op=LOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 02:43:23.432000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 02:43:23.452000 audit: BPF prog-id=55 op=LOAD
Jan 20 02:43:23.452000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 02:43:23.467000 audit: BPF prog-id=56 op=LOAD
Jan 20 02:43:23.468000 audit: BPF prog-id=57 op=LOAD
Jan 20 02:43:23.468000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 02:43:23.472000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 02:43:23.699709 systemd[1]: Finished ensure-sysext.service.
Jan 20 02:43:23.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:23.875856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:43:23.887003 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:43:23.909916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 02:43:23.923552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:43:23.956747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:43:23.993268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:43:24.031750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:43:24.082358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:43:24.121390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:43:24.121643 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 02:43:24.149421 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 02:43:24.188631 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 02:43:24.232605 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:43:24.280323 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 02:43:24.376000 audit: BPF prog-id=58 op=LOAD
Jan 20 02:43:24.406000 audit: BPF prog-id=59 op=LOAD
Jan 20 02:43:24.405312 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:43:24.413981 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 02:43:24.430190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 02:43:24.501255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:43:24.574215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:43:24.599373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:43:24.650000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 02:43:24.650000 audit[1551]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd22871350 a2=420 a3=0 items=0 ppid=1518 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:24.650000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:43:24.657922 augenrules[1551]: No rules
Jan 20 02:43:24.738350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:43:24.765954 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:43:24.766363 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:43:24.783847 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:43:24.784288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:43:24.805925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:43:24.806379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:43:24.844022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:43:24.844682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:43:24.864431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 02:43:24.882389 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:43:24.973598 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 02:43:25.024460 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 02:43:25.037781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:43:25.039648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:43:25.039687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 02:43:25.887540 systemd-networkd[1544]: lo: Link UP
Jan 20 02:43:25.887558 systemd-networkd[1544]: lo: Gained carrier
Jan 20 02:43:25.891230 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:43:25.902099 systemd-networkd[1544]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:43:25.902107 systemd-networkd[1544]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:43:25.910811 systemd-networkd[1544]: eth0: Link UP
Jan 20 02:43:25.911820 systemd-networkd[1544]: eth0: Gained carrier
Jan 20 02:43:25.911851 systemd-networkd[1544]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 02:43:26.114761 systemd-networkd[1544]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:43:26.121691 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection.
Jan 20 02:43:26.132959 systemd-timesyncd[1546]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 02:43:26.134612 systemd-timesyncd[1546]: Initial clock synchronization to Tue 2026-01-20 02:43:26.110843 UTC.
Jan 20 02:43:26.322652 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 02:43:26.345186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:43:26.374318 systemd[1]: Reached target network.target - Network.
Jan 20 02:43:26.388716 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 02:43:26.416050 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 02:43:26.448868 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 02:43:26.619026 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 02:43:27.503371 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 02:43:27.538175 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 02:43:27.581888 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 02:43:27.773142 systemd-networkd[1544]: eth0: Gained IPv6LL
Jan 20 02:43:27.788580 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 02:43:27.815196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 02:43:27.844187 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 02:43:27.860935 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:43:27.880705 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 02:43:27.909908 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 02:43:27.921603 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 02:43:27.943169 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 02:43:27.962226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 02:43:27.977401 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 20 02:43:27.990422 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 20 02:43:27.997314 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 02:43:28.009273 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 02:43:28.009725 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:43:28.017458 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:43:28.034207 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 02:43:28.055924 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 02:43:28.064322 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 02:43:28.084685 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 02:43:28.094003 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 02:43:28.145675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 02:43:28.168339 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 02:43:28.186340 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 02:43:28.227091 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:43:28.237725 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:43:28.245348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:43:28.245384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:43:28.267788 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 02:43:28.287415 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 02:43:28.347999 kernel: kvm_amd: TSC scaling supported
Jan 20 02:43:28.352697 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 02:43:28.352997 kernel: kvm_amd: Nested Paging enabled
Jan 20 02:43:28.353022 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 02:43:28.353047 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 02:43:28.369110 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 02:43:28.404183 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 02:43:28.438457 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 02:43:28.467113 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 02:43:28.491353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 02:43:28.515543 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 02:43:28.553157 jq[1590]: false
Jan 20 02:43:28.563306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:43:28.685415 extend-filesystems[1591]: Found /dev/vda6
Jan 20 02:43:28.852259 extend-filesystems[1591]: Found /dev/vda9
Jan 20 02:43:28.862037 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 02:43:28.903424 extend-filesystems[1591]: Checking size of /dev/vda9
Jan 20 02:43:28.943307 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 02:43:29.044073 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Jan 20 02:43:29.043557 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 02:43:29.036018 oslogin_cache_refresh[1592]: Refreshing passwd entry cache
Jan 20 02:43:29.139394 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting
Jan 20 02:43:29.139394 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:43:29.139394 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache
Jan 20 02:43:29.137453 oslogin_cache_refresh[1592]: Failure getting users, quitting
Jan 20 02:43:29.137559 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:43:29.137752 oslogin_cache_refresh[1592]: Refreshing group entry cache
Jan 20 02:43:29.228260 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting
Jan 20 02:43:29.228260 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:43:29.227272 oslogin_cache_refresh[1592]: Failure getting groups, quitting
Jan 20 02:43:29.227317 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:43:29.255223 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 02:43:29.329604 extend-filesystems[1591]: Resized partition /dev/vda9
Jan 20 02:43:29.350805 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 02:43:29.429393 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 02:43:29.640060 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 20 02:43:29.640180 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 20 02:43:29.506263 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 02:43:29.587549 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 02:43:29.709239 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 02:43:29.709239 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 02:43:29.709239 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 20 02:43:29.609599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 02:43:29.765221 extend-filesystems[1591]: Resized filesystem in /dev/vda9
Jan 20 02:43:29.655078 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 02:43:29.813720 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 02:43:29.950326 jq[1621]: true
Jan 20 02:43:30.034973 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 02:43:30.061656 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 02:43:30.066634 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 02:43:30.067296 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 02:43:30.068674 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 02:43:30.120807 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 02:43:30.152300 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 02:43:30.210609 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 02:43:30.212359 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 02:43:30.236730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 02:43:30.289819 update_engine[1617]: I20260120 02:43:30.284564 1617 main.cc:92] Flatcar Update Engine starting
Jan 20 02:43:30.384638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 02:43:30.397571 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 02:43:30.988008 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 02:43:30.991463 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 02:43:31.903558 jq[1639]: true
Jan 20 02:43:31.925115 tar[1637]: linux-amd64/LICENSE
Jan 20 02:43:31.925115 tar[1637]: linux-amd64/helm
Jan 20 02:43:32.632912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 02:43:33.437124 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 02:43:33.690409 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 20 02:43:33.692051 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 02:43:33.697421 systemd-logind[1612]: New seat seat0.
Jan 20 02:43:33.720538 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 02:43:33.767080 dbus-daemon[1588]: [system] SELinux support is enabled
Jan 20 02:43:33.813045 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 02:43:33.880309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 02:43:33.880352 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 02:43:33.919943 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 02:43:33.920041 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 02:43:33.977551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 02:43:33.981154 dbus-daemon[1588]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 20 02:43:34.042236 update_engine[1617]: I20260120 02:43:34.028882 1617 update_check_scheduler.cc:74] Next update check in 11m50s
Jan 20 02:43:34.083433 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 02:43:34.118435 bash[1685]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 02:43:34.156282 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 02:43:34.216431 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 02:43:34.345838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 02:43:34.485100 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 02:43:34.493338 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 02:43:34.495226 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 02:43:34.585307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 02:43:34.659915 containerd[1640]: time="2026-01-20T02:43:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 02:43:34.663250 containerd[1640]: time="2026-01-20T02:43:34.660627694Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 20 02:43:34.884009 containerd[1640]: time="2026-01-20T02:43:34.875291918Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.793µs"
Jan 20 02:43:34.884009 containerd[1640]: time="2026-01-20T02:43:34.877934105Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 02:43:34.884009 containerd[1640]: time="2026-01-20T02:43:34.878092708Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 02:43:34.884009 containerd[1640]: time="2026-01-20T02:43:34.878238318Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 02:43:34.902654 containerd[1640]: time="2026-01-20T02:43:34.894172835Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 02:43:34.902654 containerd[1640]: time="2026-01-20T02:43:34.896364866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 02:43:34.902654 containerd[1640]: time="2026-01-20T02:43:34.902729982Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 02:43:34.902654 containerd[1640]: time="2026-01-20T02:43:34.902830035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.933006 containerd[1640]: time="2026-01-20T02:43:34.910205132Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.933006 containerd[1640]: time="2026-01-20T02:43:34.910464450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 02:43:34.933006 containerd[1640]: time="2026-01-20T02:43:34.917946729Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 02:43:34.933006 containerd[1640]: time="2026-01-20T02:43:34.917977710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.951198 containerd[1640]: time="2026-01-20T02:43:34.950956459Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.951198 containerd[1640]: time="2026-01-20T02:43:34.951085172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 02:43:34.954275 containerd[1640]: time="2026-01-20T02:43:34.953325033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.954275 containerd[1640]: time="2026-01-20T02:43:34.953851445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.954275 containerd[1640]: time="2026-01-20T02:43:34.953905160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 02:43:34.954275 containerd[1640]: time="2026-01-20T02:43:34.953921758Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 02:43:34.954275 containerd[1640]: time="2026-01-20T02:43:34.954008618Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 02:43:34.954436 containerd[1640]: time="2026-01-20T02:43:34.954293621Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 02:43:34.954436 containerd[1640]: time="2026-01-20T02:43:34.954384485Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 02:43:35.206443 containerd[1640]: time="2026-01-20T02:43:35.189909148Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 02:43:35.206443 containerd[1640]: time="2026-01-20T02:43:35.201968107Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 20 02:43:35.227605 containerd[1640]: time="2026-01-20T02:43:35.224084218Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 20 02:43:35.227605 containerd[1640]: time="2026-01-20T02:43:35.224206031Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 02:43:35.227605 containerd[1640]: time="2026-01-20T02:43:35.224238765Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 02:43:35.227605 containerd[1640]: time="2026-01-20T02:43:35.224268268Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 02:43:35.279761 containerd[1640]: time="2026-01-20T02:43:35.279633960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 02:43:35.279761 containerd[1640]: time="2026-01-20T02:43:35.279754302Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 02:43:35.279761 containerd[1640]: time="2026-01-20T02:43:35.279781691Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 02:43:35.279761 containerd[1640]: time="2026-01-20T02:43:35.279799791Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 02:43:35.279761 containerd[1640]: time="2026-01-20T02:43:35.279819302Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 02:43:35.280790 containerd[1640]: time="2026-01-20T02:43:35.280035536Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 02:43:35.280790 containerd[1640]: time="2026-01-20T02:43:35.280057000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 02:43:35.280790 containerd[1640]: time="2026-01-20T02:43:35.280083579Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 02:43:35.282612 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.315148909Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.321916334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322049559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322259406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322287376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322302833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322325168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322338461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322352236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322366291Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322381527Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 02:43:35.321115 containerd[1640]: time="2026-01-20T02:43:35.322420030Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 02:43:35.323904 containerd[1640]: time="2026-01-20T02:43:35.322560752Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 02:43:35.323904 containerd[1640]: time="2026-01-20T02:43:35.322583667Z" level=info msg="Start snapshots syncer"
Jan 20 02:43:35.323904 containerd[1640]: time="2026-01-20T02:43:35.322673135Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 02:43:35.340420 containerd[1640]: time="2026-01-20T02:43:35.328975045Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 02:43:35.340420 containerd[1640]: time="2026-01-20T02:43:35.329118520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 02:43:35.353655 containerd[1640]: time="2026-01-20T02:43:35.353090700Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361838870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361906112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361937827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361953544Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361974997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.361989813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362005100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362034392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362052912Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362105399Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362126742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362139856Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362162531Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 02:43:35.362221 containerd[1640]: time="2026-01-20T02:43:35.362173443Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362226520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362243268Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362273360Z" level=info msg="runtime interface created"
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362282530Z" level=info msg="created NRI interface"
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362293472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362315857Z" level=info msg="Connect containerd service"
Jan 20 02:43:35.420021 containerd[1640]: time="2026-01-20T02:43:35.362347591Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 02:43:35.438818 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 02:43:35.533753 containerd[1640]: time="2026-01-20T02:43:35.441313431Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 02:43:35.653605 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 02:43:35.701053 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 02:43:36.643857 locksmithd[1689]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 02:43:38.009135 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 02:43:38.104029 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:60404.service - OpenSSH per-connection server daemon (10.0.0.1:60404).
Jan 20 02:43:38.382871 containerd[1640]: time="2026-01-20T02:43:38.382820852Z" level=info msg="Start subscribing containerd event"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394075794Z" level=info msg="Start recovering state"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394268349Z" level=info msg="Start event monitor"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394287232Z" level=info msg="Start cni network conf syncer for default"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394302350Z" level=info msg="Start streaming server"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394314206Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394324208Z" level=info msg="runtime interface starting up..."
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394332999Z" level=info msg="starting plugins..."
Jan 20 02:43:38.396938 containerd[1640]: time="2026-01-20T02:43:38.394399701Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 02:43:38.401545 containerd[1640]: time="2026-01-20T02:43:38.397775294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 02:43:38.401545 containerd[1640]: time="2026-01-20T02:43:38.399372815Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 02:43:38.420201 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 02:43:38.427908 containerd[1640]: time="2026-01-20T02:43:38.420418145Z" level=info msg="containerd successfully booted in 3.766449s"
Jan 20 02:43:39.897282 tar[1637]: linux-amd64/README.md
Jan 20 02:43:40.050865 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 60404 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:40.200167 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:40.548300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 02:43:40.807568 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 02:43:41.112066 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 02:43:41.214044 systemd-logind[1612]: New session 1 of user core.
Jan 20 02:43:42.916999 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 20 02:43:42.973130 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 02:43:43.086594 (systemd)[1731]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 20 02:43:43.091852 kernel: EDAC MC: Ver: 3.0.0
Jan 20 02:43:43.136050 systemd-logind[1612]: New session c1 of user core.
Jan 20 02:43:44.097095 systemd[1731]: Queued start job for default target default.target.
Jan 20 02:43:44.123684 systemd[1731]: Created slice app.slice - User Application Slice.
Jan 20 02:43:44.123736 systemd[1731]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 20 02:43:44.123760 systemd[1731]: Reached target paths.target - Paths.
Jan 20 02:43:44.124060 systemd[1731]: Reached target timers.target - Timers.
Jan 20 02:43:44.142180 systemd[1731]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 02:43:44.146718 systemd[1731]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 20 02:43:44.231743 systemd[1731]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 02:43:44.231882 systemd[1731]: Reached target sockets.target - Sockets.
Jan 20 02:43:44.253942 systemd[1731]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 20 02:43:44.254139 systemd[1731]: Reached target basic.target - Basic System.
Jan 20 02:43:44.254292 systemd[1731]: Reached target default.target - Main User Target.
Jan 20 02:43:44.254353 systemd[1731]: Startup finished in 1.031s.
Jan 20 02:43:44.255410 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 02:43:44.278838 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 02:43:44.387986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:43:44.408786 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 02:43:44.442068 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:60430.service - OpenSSH per-connection server daemon (10.0.0.1:60430).
Jan 20 02:43:44.464359 systemd[1]: Startup finished in 14.873s (kernel) + 36.757s (initrd) + 35.865s (userspace) = 1min 27.495s.
Jan 20 02:43:44.504731 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:43:44.840839 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 60430 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:44.846154 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:44.870054 systemd-logind[1612]: New session 2 of user core.
Jan 20 02:43:44.880338 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 02:43:44.945049 sshd[1755]: Connection closed by 10.0.0.1 port 60430
Jan 20 02:43:44.943830 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:44.968377 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:60430.service: Deactivated successfully.
Jan 20 02:43:44.973199 systemd[1]: session-2.scope: Deactivated successfully.
Jan 20 02:43:44.999801 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit.
Jan 20 02:43:45.028865 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:55972.service - OpenSSH per-connection server daemon (10.0.0.1:55972).
Jan 20 02:43:45.031200 systemd-logind[1612]: Removed session 2.
Jan 20 02:43:45.284669 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 55972 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:45.293334 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:45.345043 systemd-logind[1612]: New session 3 of user core.
Jan 20 02:43:45.372872 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 02:43:45.441144 sshd[1769]: Connection closed by 10.0.0.1 port 55972
Jan 20 02:43:45.440747 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:45.470150 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:55972.service: Deactivated successfully.
Jan 20 02:43:45.496273 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 02:43:45.502937 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit.
Jan 20 02:43:45.519165 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:55978.service - OpenSSH per-connection server daemon (10.0.0.1:55978).
Jan 20 02:43:45.531112 systemd-logind[1612]: Removed session 3.
Jan 20 02:43:45.743632 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 55978 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:45.741331 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:45.777579 systemd-logind[1612]: New session 4 of user core.
Jan 20 02:43:45.798046 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 02:43:45.878540 sshd[1778]: Connection closed by 10.0.0.1 port 55978
Jan 20 02:43:45.879356 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:45.901856 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:55978.service: Deactivated successfully.
Jan 20 02:43:45.904390 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 02:43:45.915908 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit.
Jan 20 02:43:45.922290 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:55982.service - OpenSSH per-connection server daemon (10.0.0.1:55982).
Jan 20 02:43:45.926714 systemd-logind[1612]: Removed session 4.
Jan 20 02:43:46.075233 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 55982 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:46.080202 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:46.116059 systemd-logind[1612]: New session 5 of user core.
Jan 20 02:43:46.135268 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 02:43:46.283663 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 02:43:46.284173 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:43:46.336611 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jan 20 02:43:46.367898 sshd[1788]: Connection closed by 10.0.0.1 port 55982
Jan 20 02:43:46.361774 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:46.400744 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:55982.service: Deactivated successfully.
Jan 20 02:43:46.431096 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 02:43:46.438970 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit.
Jan 20 02:43:46.441838 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014).
Jan 20 02:43:46.447970 systemd-logind[1612]: Removed session 5.
Jan 20 02:43:46.726243 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:46.740533 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:46.780737 systemd-logind[1612]: New session 6 of user core.
Jan 20 02:43:46.804593 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 02:43:46.894774 kubelet[1750]: E0120 02:43:46.894412 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:43:46.903277 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 02:43:46.903797 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:43:46.917723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:43:46.918803 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:43:46.924988 systemd[1]: kubelet.service: Consumed 1.937s CPU time, 259.1M memory peak.
Jan 20 02:43:46.962260 sudo[1801]: pam_unix(sudo:session): session closed for user root
Jan 20 02:43:46.990071 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 02:43:46.990622 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:43:47.053119 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:43:47.244000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 02:43:47.256379 augenrules[1824]: No rules
Jan 20 02:43:47.263033 kernel: kauditd_printk_skb: 62 callbacks suppressed
Jan 20 02:43:47.263114 kernel: audit: type=1305 audit(1768877027.244:213): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 02:43:47.267391 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:43:47.271730 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:43:47.286771 kernel: audit: type=1300 audit(1768877027.244:213): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffead7fcf0 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:47.244000 audit[1824]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffead7fcf0 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:47.285600 sudo[1800]: pam_unix(sudo:session): session closed for user root
Jan 20 02:43:47.244000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:43:47.360631 kernel: audit: type=1327 audit(1768877027.244:213): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:43:47.360776 kernel: audit: type=1130 audit(1768877027.274:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.360919 sshd[1799]: Connection closed by 10.0.0.1 port 56014
Jan 20 02:43:47.357634 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:47.390850 kernel: audit: type=1131 audit(1768877027.274:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.282000 audit[1800]: USER_END pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.433851 kernel: audit: type=1106 audit(1768877027.282:216): pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.282000 audit[1800]: CRED_DISP pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.460306 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:56014.service: Deactivated successfully.
Jan 20 02:43:47.470748 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 02:43:47.472372 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit.
Jan 20 02:43:47.487730 kernel: audit: type=1104 audit(1768877027.282:217): pid=1800 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.487821 kernel: audit: type=1106 audit(1768877027.358:218): pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.358000 audit[1795]: USER_END pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.477447 systemd-logind[1612]: Removed session 6.
Jan 20 02:43:47.500291 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:56020.service - OpenSSH per-connection server daemon (10.0.0.1:56020).
Jan 20 02:43:47.538781 kernel: audit: type=1104 audit(1768877027.366:219): pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.366000 audit[1795]: CRED_DISP pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.129:22-10.0.0.1:56014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.582751 kernel: audit: type=1131 audit(1768877027.466:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.129:22-10.0.0.1:56014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.129:22-10.0.0.1:56020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:47.676000 audit[1834]: USER_ACCT pid=1834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.692572 sshd[1834]: Accepted publickey for core from 10.0.0.1 port 56020 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:43:47.696000 audit[1834]: CRED_ACQ pid=1834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:43:47.696000 audit[1834]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc6a88bf0 a2=3 a3=0 items=0 ppid=1 pid=1834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:47.696000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:43:47.697802 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:47.718023 systemd-logind[1612]: New session 7 of user core.
Jan 20 02:43:47.744965 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 02:43:47.765000 audit[1834]: USER_START pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:43:47.773000 audit[1837]: CRED_ACQ pid=1837 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:43:47.807000 audit[1838]: USER_ACCT pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:43:47.811112 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:43:47.811000 audit[1838]: CRED_REFR pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:43:47.813027 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:43:47.833000 audit[1838]: USER_START pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:43:49.006212 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 02:43:49.033377 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:43:50.093884 dockerd[1859]: time="2026-01-20T02:43:50.093425655Z" level=info msg="Starting up" Jan 20 02:43:50.102555 dockerd[1859]: time="2026-01-20T02:43:50.100680083Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:43:50.182322 dockerd[1859]: time="2026-01-20T02:43:50.181048249Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:43:50.386238 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3284901023-merged.mount: Deactivated successfully. Jan 20 02:43:50.659402 dockerd[1859]: time="2026-01-20T02:43:50.659123004Z" level=info msg="Loading containers: start." Jan 20 02:43:50.753266 kernel: Initializing XFRM netlink socket Jan 20 02:43:51.412000 audit[1912]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.412000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc5a03fc90 a2=0 a3=0 items=0 ppid=1859 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 02:43:51.473000 audit[1914]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.473000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff43a9cec0 a2=0 a3=0 items=0 ppid=1859 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 02:43:51.515000 audit[1916]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.515000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd435abed0 a2=0 a3=0 items=0 ppid=1859 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 02:43:51.559000 audit[1918]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.559000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffcd92600 a2=0 a3=0 items=0 ppid=1859 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 02:43:51.603000 audit[1920]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.603000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4778d550 a2=0 a3=0 items=0 ppid=1859 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 02:43:51.645000 audit[1922]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.645000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd97c107c0 a2=0 a3=0 items=0 ppid=1859 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 02:43:51.685000 audit[1924]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.685000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb0d3a5e0 a2=0 a3=0 items=0 ppid=1859 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 02:43:51.712000 audit[1926]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.712000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdf4cb9210 a2=0 a3=0 items=0 ppid=1859 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 02:43:51.991000 audit[1929]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:51.991000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc9576b6f0 a2=0 a3=0 items=0 ppid=1859 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:51.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 02:43:52.050000 audit[1931]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:52.050000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd0d278d0 a2=0 a3=0 items=0 ppid=1859 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 02:43:52.096000 audit[1933]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:52.096000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=236 a0=3 a1=7fffc32068b0 a2=0 a3=0 items=0 ppid=1859 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 02:43:52.120000 audit[1935]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:52.120000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc11c09680 a2=0 a3=0 items=0 ppid=1859 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 02:43:52.159000 audit[1937]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:52.159000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffda88c9e00 a2=0 a3=0 items=0 ppid=1859 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 02:43:52.728000 audit[1967]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.749574 kernel: 
kauditd_printk_skb: 50 callbacks suppressed Jan 20 02:43:52.749715 kernel: audit: type=1325 audit(1768877032.728:243): table=nat:15 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.728000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce6b05080 a2=0 a3=0 items=0 ppid=1859 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.841455 kernel: audit: type=1300 audit(1768877032.728:243): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce6b05080 a2=0 a3=0 items=0 ppid=1859 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.846915 kernel: audit: type=1327 audit(1768877032.728:243): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 02:43:52.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 02:43:52.869897 kernel: audit: type=1325 audit(1768877032.766:244): table=filter:16 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.766000 audit[1969]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.766000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc9dde5160 a2=0 a3=0 items=0 ppid=1859 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.910686 kernel: audit: 
type=1300 audit(1768877032.766:244): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc9dde5160 a2=0 a3=0 items=0 ppid=1859 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.910826 kernel: audit: type=1327 audit(1768877032.766:244): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 02:43:52.910858 kernel: audit: type=1325 audit(1768877032.790:245): table=filter:17 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 02:43:52.790000 audit[1971]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.919565 kernel: audit: type=1300 audit(1768877032.790:245): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd8b42150 a2=0 a3=0 items=0 ppid=1859 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.790000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd8b42150 a2=0 a3=0 items=0 ppid=1859 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.939573 kernel: audit: type=1327 audit(1768877032.790:245): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 02:43:52.790000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 02:43:52.946133 kernel: audit: type=1325 audit(1768877032.808:246): table=filter:18 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.808000 audit[1973]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.808000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee0f87b80 a2=0 a3=0 items=0 ppid=1859 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 02:43:52.848000 audit[1975]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.848000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe05e3ec50 a2=0 a3=0 items=0 ppid=1859 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.848000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 02:43:52.876000 audit[1977]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.876000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdf1efb9b0 a2=0 a3=0 items=0 ppid=1859 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 02:43:52.903000 audit[1979]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.903000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc5ea12d70 a2=0 a3=0 items=0 ppid=1859 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.903000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 02:43:52.957000 audit[1981]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.957000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff7bc2fd10 a2=0 a3=0 items=0 ppid=1859 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 02:43:52.995000 audit[1983]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:52.995000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=484 a0=3 a1=7ffd7da3f2c0 a2=0 a3=0 items=0 ppid=1859 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:52.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 02:43:53.008000 audit[1985]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:53.008000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1f390310 a2=0 a3=0 items=0 ppid=1859 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 02:43:53.020000 audit[1987]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:53.020000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffad09d7e0 a2=0 a3=0 items=0 ppid=1859 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 02:43:53.035000 audit[1989]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1989 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:53.035000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc1a56ff50 a2=0 a3=0 items=0 ppid=1859 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 02:43:53.044000 audit[1991]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:53.044000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff09f0a0c0 a2=0 a3=0 items=0 ppid=1859 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 02:43:53.117000 audit[1996]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:53.117000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3708d690 a2=0 a3=0 items=0 ppid=1859 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 02:43:53.125000 audit[1998]: NETFILTER_CFG table=filter:29 family=2 
entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:53.125000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc185358f0 a2=0 a3=0 items=0 ppid=1859 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 02:43:53.129000 audit[2000]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:43:53.129000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd0a4eea80 a2=0 a3=0 items=0 ppid=1859 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 02:43:53.157000 audit[2002]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:43:53.157000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe608adbc0 a2=0 a3=0 items=0 ppid=1859 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:43:53.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 02:43:53.180000 audit[2004]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:43:53.180000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff6a2df030 a2=0 a3=0 items=0 ppid=1859 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 20 02:43:53.209000 audit[2006]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:43:53.209000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd05627ae0 a2=0 a3=0 items=0 ppid=1859 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 20 02:43:53.358000 audit[2011]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.358000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffc05cf1c0 a2=0 a3=0 items=0 ppid=1859 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Jan 20 02:43:53.406000 audit[2013]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.406000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff6b19f8e0 a2=0 a3=0 items=0 ppid=1859 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Jan 20 02:43:53.473000 audit[2021]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.473000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcf615d9a0 a2=0 a3=0 items=0 ppid=1859 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Jan 20 02:43:53.569000 audit[2027]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.569000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc908adad0 a2=0 a3=0 items=0 ppid=1859 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Jan 20 02:43:53.575000 audit[2029]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.575000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd4c249120 a2=0 a3=0 items=0 ppid=1859 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Jan 20 02:43:53.586000 audit[2031]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.586000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc5c622f0 a2=0 a3=0 items=0 ppid=1859 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552
Jan 20 02:43:53.594000 audit[2033]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.594000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff0b7b0e40 a2=0 a3=0 items=0 ppid=1859 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 20 02:43:53.603000 audit[2035]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:43:53.603000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe97494830 a2=0 a3=0 items=0 ppid=1859 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:43:53.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Jan 20 02:43:53.608366 systemd-networkd[1544]: docker0: Link UP
Jan 20 02:43:53.642712 dockerd[1859]: time="2026-01-20T02:43:53.642564087Z" level=info msg="Loading containers: done."
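The PROCTITLE records above carry each command line hex-encoded, with NUL bytes separating the argv elements. A minimal decoding sketch (the value below is copied verbatim from the first PROCTITLE record above):

```python
# Audit PROCTITLE values are the process argv, hex-encoded, with
# NUL (0x00) bytes between arguments.
hex_proctitle = "2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E"
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# → /usr/bin/ip6tables --wait -A DOCKER-USER -j RETURN
```

Decoded this way, the audit records are the Docker daemon installing its DOCKER-USER, DOCKER-FORWARD, isolation, and MASQUERADE rules via xtables-nft-multi.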
Jan 20 02:43:53.893673 dockerd[1859]: time="2026-01-20T02:43:53.890141921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 02:43:53.893673 dockerd[1859]: time="2026-01-20T02:43:53.891870594Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 02:43:53.896977 dockerd[1859]: time="2026-01-20T02:43:53.896109628Z" level=info msg="Initializing buildkit"
Jan 20 02:43:54.164884 dockerd[1859]: time="2026-01-20T02:43:54.164435837Z" level=info msg="Completed buildkit initialization"
Jan 20 02:43:54.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:54.235047 dockerd[1859]: time="2026-01-20T02:43:54.228878209Z" level=info msg="Daemon has completed initialization"
Jan 20 02:43:54.235047 dockerd[1859]: time="2026-01-20T02:43:54.232108267Z" level=info msg="API listen on /run/docker.sock"
Jan 20 02:43:54.230997 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 02:43:56.895258 containerd[1640]: time="2026-01-20T02:43:56.894679174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 20 02:43:57.066772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:43:57.087209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:43:58.038795 kernel: kauditd_printk_skb: 72 callbacks suppressed
Jan 20 02:43:58.039020 kernel: audit: type=1130 audit(1768877038.020:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:58.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:43:58.021393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:43:58.102766 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:43:58.476266 kubelet[2088]: E0120 02:43:58.471771 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:43:58.491974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:43:58.492237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:43:58.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:43:58.495124 systemd[1]: kubelet.service: Consumed 433ms CPU time, 110.5M memory peak.
Jan 20 02:43:58.545923 kernel: audit: type=1131 audit(1768877038.488:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:43:59.426796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274668971.mount: Deactivated successfully.
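The kernel audit lines above stamp each record as `audit(EPOCH.MS:SERIAL)`: seconds since the Unix epoch, milliseconds, and a record serial number. Converting the epoch reproduces the journald wall-clock timestamp, as this small sketch shows for the SERVICE_START record above:

```python
from datetime import datetime, timezone

# "audit(1768877038.020:271)" = epoch seconds + milliseconds, then a
# record serial number; the value is taken from the kernel line above.
epoch, serial = 1768877038.020, 271
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.strftime("%b %d %H:%M:%S"), f"(serial {serial})")
# → Jan 20 02:43:58 (serial 271), matching the journald timestamp
```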
Jan 20 02:44:06.420933 containerd[1640]: time="2026-01-20T02:44:06.419591265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:06.428133 containerd[1640]: time="2026-01-20T02:44:06.427634056Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26093127"
Jan 20 02:44:06.432162 containerd[1640]: time="2026-01-20T02:44:06.431258969Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:06.452466 containerd[1640]: time="2026-01-20T02:44:06.448624445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:06.456396 containerd[1640]: time="2026-01-20T02:44:06.454758565Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 9.560029718s"
Jan 20 02:44:06.456396 containerd[1640]: time="2026-01-20T02:44:06.454848184Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 20 02:44:06.466587 containerd[1640]: time="2026-01-20T02:44:06.463107887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 20 02:44:08.576114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 02:44:08.599543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:44:09.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:09.445992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:44:09.493845 kernel: audit: type=1130 audit(1768877049.445:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:09.486007 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:44:10.001968 kubelet[2162]: E0120 02:44:09.999420 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:44:10.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:10.018460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:44:10.018837 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:44:10.019624 systemd[1]: kubelet.service: Consumed 417ms CPU time, 109.8M memory peak.
Jan 20 02:44:10.078415 kernel: audit: type=1131 audit(1768877050.017:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:19.013345 update_engine[1617]: I20260120 02:44:18.980844 1617 update_attempter.cc:509] Updating boot flags...
Jan 20 02:44:20.091715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 02:44:20.172678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:44:21.036730 containerd[1640]: time="2026-01-20T02:44:21.036549734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:21.051681 containerd[1640]: time="2026-01-20T02:44:21.051558259Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285"
Jan 20 02:44:21.090552 containerd[1640]: time="2026-01-20T02:44:21.087358550Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:21.129314 containerd[1640]: time="2026-01-20T02:44:21.129201308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:21.169407 containerd[1640]: time="2026-01-20T02:44:21.134807027Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 14.671653088s"
Jan 20 02:44:21.170936 containerd[1640]: time="2026-01-20T02:44:21.170853419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 20 02:44:21.177909 containerd[1640]: time="2026-01-20T02:44:21.177697974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 20 02:44:22.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:22.327302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:44:22.387236 kernel: audit: type=1130 audit(1768877062.326:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:22.397317 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:44:22.931895 kubelet[2201]: E0120 02:44:22.928362 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:44:22.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:22.946706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:44:22.981237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:44:22.988284 systemd[1]: kubelet.service: Consumed 723ms CPU time, 108.9M memory peak.
Jan 20 02:44:23.029553 kernel: audit: type=1131 audit(1768877062.982:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:31.943955 containerd[1640]: time="2026-01-20T02:44:31.940026464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:31.988412 containerd[1640]: time="2026-01-20T02:44:31.987680587Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792"
Jan 20 02:44:32.012021 containerd[1640]: time="2026-01-20T02:44:32.006733870Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:32.082885 containerd[1640]: time="2026-01-20T02:44:32.081093951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:32.095797 containerd[1640]: time="2026-01-20T02:44:32.094317175Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 10.916545558s"
Jan 20 02:44:32.095797 containerd[1640]: time="2026-01-20T02:44:32.094422032Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 20 02:44:32.103565 containerd[1640]: time="2026-01-20T02:44:32.103421125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 20 02:44:33.087377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 02:44:33.133876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:44:35.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:35.642731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:44:35.676698 kernel: audit: type=1130 audit(1768877075.641:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:35.681064 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:44:35.998054 kubelet[2222]: E0120 02:44:35.997166 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:44:36.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:36.012249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:44:36.012629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:44:36.013288 systemd[1]: kubelet.service: Consumed 534ms CPU time, 110.4M memory peak.
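The repeated SERVICE_START/SERVICE_STOP pairs are systemd's restart logic cycling kubelet, which exits with status 1 every time because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by `kubeadm init` or `kubeadm join`. A hedged sketch of the failing precondition (the check is illustrative, not kubelet's actual startup code):

```python
import os

# Illustrative check mirroring the failure in the log: kubelet exits 1
# until kubeadm (init/join) writes this config file.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    return os.path.isfile(path)

if not kubelet_config_present():
    print(f"{KUBELET_CONFIG} missing: kubelet will crash-loop until kubeadm writes it")
```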
Jan 20 02:44:36.063326 kernel: audit: type=1131 audit(1768877076.011:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:39.111074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849630459.mount: Deactivated successfully.
Jan 20 02:44:43.483926 containerd[1640]: time="2026-01-20T02:44:43.481006216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:43.488963 containerd[1640]: time="2026-01-20T02:44:43.488904959Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25963572"
Jan 20 02:44:43.495871 containerd[1640]: time="2026-01-20T02:44:43.492590716Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:43.503529 containerd[1640]: time="2026-01-20T02:44:43.502012247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:44:43.506808 containerd[1640]: time="2026-01-20T02:44:43.505144879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 11.401670799s"
Jan 20 02:44:43.506808 containerd[1640]: time="2026-01-20T02:44:43.505187445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 20 02:44:43.513117 containerd[1640]: time="2026-01-20T02:44:43.512648190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 20 02:44:47.066278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 20 02:44:47.072127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:44:47.298848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857953271.mount: Deactivated successfully.
Jan 20 02:44:50.785776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:44:50.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:50.830365 kernel: audit: type=1130 audit(1768877090.792:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:44:50.841103 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:44:52.630048 kubelet[2258]: E0120 02:44:52.625219 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:44:52.743451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:44:52.802075 systemd[1]: kubelet.service: Failed with result 'exit-code'.
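Each pull in this log ends with a containerd "Pulled image" entry that reports the repo tag, digest, size, and elapsed time. Those fields can be scraped with a regex; a sketch, using the kube-proxy values from the log above (note the escaped quotes appear literally in the journal text):

```python
import re

# Scrape image name and pull duration from a containerd "Pulled image"
# line; the \" sequences are literal backslash-quote pairs in the journal.
line = ('msg="Pulled image \\"registry.k8s.io/kube-proxy:v1.34.3\\" '
        'with image id \\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\\" '
        'in 11.401670799s"')
m = re.search(r'Pulled image \\"([^\\]+)\\".* in ([0-9.]+)s"', line)
image, seconds = m.group(1), float(m.group(2))
print(image, seconds)
# → registry.k8s.io/kube-proxy:v1.34.3 11.401670799
```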
Jan 20 02:44:52.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:52.877087 kernel: audit: type=1131 audit(1768877092.811:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:44:52.813846 systemd[1]: kubelet.service: Consumed 2.596s CPU time, 111.3M memory peak.
Jan 20 02:45:03.316538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 20 02:45:03.429358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:45:08.714089 containerd[1640]: time="2026-01-20T02:45:08.707405455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:08.931791 containerd[1640]: time="2026-01-20T02:45:08.926137739Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22377113"
Jan 20 02:45:08.937247 containerd[1640]: time="2026-01-20T02:45:08.934105009Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:08.961166 containerd[1640]: time="2026-01-20T02:45:08.960850865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:09.002833 containerd[1640]: time="2026-01-20T02:45:08.997813718Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 25.485069304s"
Jan 20 02:45:09.002833 containerd[1640]: time="2026-01-20T02:45:08.997879118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 20 02:45:09.003322 containerd[1640]: time="2026-01-20T02:45:09.003287705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 20 02:45:09.221413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:45:09.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:09.283991 kernel: audit: type=1130 audit(1768877109.221:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:09.312749 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:45:10.447763 kubelet[2315]: E0120 02:45:10.431261 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:45:10.500891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:45:10.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:10.501171 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:45:10.502677 systemd[1]: kubelet.service: Consumed 1.585s CPU time, 110.5M memory peak.
Jan 20 02:45:10.560878 kernel: audit: type=1131 audit(1768877110.501:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:10.831405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908086559.mount: Deactivated successfully.
Jan 20 02:45:10.880598 containerd[1640]: time="2026-01-20T02:45:10.879319517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:10.888569 containerd[1640]: time="2026-01-20T02:45:10.888352229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=2907"
Jan 20 02:45:10.904775 containerd[1640]: time="2026-01-20T02:45:10.900820002Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:10.923845 containerd[1640]: time="2026-01-20T02:45:10.921270043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:10.928222 containerd[1640]: time="2026-01-20T02:45:10.927931293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.924325905s"
Jan 20 02:45:10.928222 containerd[1640]: time="2026-01-20T02:45:10.927974151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 20 02:45:10.939221 containerd[1640]: time="2026-01-20T02:45:10.936159191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 20 02:45:14.073745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394174778.mount: Deactivated successfully.
Jan 20 02:45:20.690380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 20 02:45:20.727392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:45:22.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:22.816693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:45:22.862169 kernel: audit: type=1130 audit(1768877122.815:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:22.865828 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:45:23.177711 kubelet[2388]: E0120 02:45:23.177538 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:45:23.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:23.182530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:45:23.182802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:45:23.187790 systemd[1]: kubelet.service: Consumed 664ms CPU time, 110.4M memory peak.
Jan 20 02:45:23.221922 kernel: audit: type=1131 audit(1768877123.178:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:34.307035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 20 02:45:34.329270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:45:36.323384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:45:36.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:36.431571 kernel: audit: type=1130 audit(1768877136.333:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:37.880002 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:45:39.937870 kubelet[2405]: E0120 02:45:39.937096 2405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:45:39.985064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:45:39.987801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:45:39.999125 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 112.1M memory peak.
Jan 20 02:45:39.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:40.070150 kernel: audit: type=1131 audit(1768877139.998:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:46.125230 containerd[1640]: time="2026-01-20T02:45:46.123087002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:46.147594 containerd[1640]: time="2026-01-20T02:45:46.147319816Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74156876"
Jan 20 02:45:46.162608 containerd[1640]: time="2026-01-20T02:45:46.160412322Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:46.169669 containerd[1640]: time="2026-01-20T02:45:46.166998485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:45:46.169669 containerd[1640]: time="2026-01-20T02:45:46.169395374Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 35.230677859s"
Jan 20 02:45:46.177760 containerd[1640]: time="2026-01-20T02:45:46.175934059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 20 02:45:50.067846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
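The etcd pull event above reports both the image size ("74311308" bytes) and the pull duration (35.230677859s), so the effective transfer rate can be checked directly. A quick sketch using those two values copied from the log (the MiB/s figure is just arithmetic, not something containerd reports):

```python
# Effective throughput of the etcd image pull, from values in the
# containerd "Pulled image" event above.
size_bytes = 74_311_308        # size "74311308"
duration_s = 35.230677859      # "in 35.230677859s"

rate_mib_s = size_bytes / duration_s / (1024 * 1024)
print(f"{rate_mib_s:.2f} MiB/s")  # roughly 2 MiB/s
```

At ~2 MiB/s the 35-second pull time is unsurprising for a ~71 MiB image.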
Jan 20 02:45:50.108652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:45:52.398406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:45:52.595832 kernel: audit: type=1130 audit(1768877152.494:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:52.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:45:52.636901 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:45:53.771925 kubelet[2448]: E0120 02:45:53.771014 2448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:45:53.809008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:45:53.809291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:45:53.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:45:53.880116 systemd[1]: kubelet.service: Consumed 921ms CPU time, 109.7M memory peak.
Jan 20 02:45:53.930028 kernel: audit: type=1131 audit(1768877153.862:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:46:03.873831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 20 02:46:03.959376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:46:06.808923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:46:06.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:06.884282 kernel: audit: type=1130 audit(1768877166.806:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:06.962364 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:46:07.959857 kubelet[2465]: E0120 02:46:07.943246 2465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:46:07.977330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:46:07.984258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:46:07.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:46:07.997741 systemd[1]: kubelet.service: Consumed 823ms CPU time, 110M memory peak.
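The kernel's `audit(1768877166.806:289)` prefix in the records above is a Unix epoch timestamp (seconds.milliseconds) plus a per-boot record serial; decoding it should reproduce the wall-clock time the journal prints for the same event. A minimal check, assuming the journal timestamps are UTC:

```python
from datetime import datetime, timezone

# Epoch and serial taken from "audit(1768877166.806:289)" above.
epoch, serial = 1768877166.806, 289

ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.strftime("%b %d %H:%M:%S"), f"serial={serial}")
# → Jan 20 02:46:06 serial=289
```

That matches the `Jan 20 02:46:06.884282` journal line carrying the same audit record, so the two clocks agree.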
Jan 20 02:46:08.059849 kernel: audit: type=1131 audit(1768877167.985:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:46:11.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:11.906029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:46:11.906321 systemd[1]: kubelet.service: Consumed 823ms CPU time, 110M memory peak.
Jan 20 02:46:11.939131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:46:11.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:11.983159 kernel: audit: type=1130 audit(1768877171.904:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:11.983315 kernel: audit: type=1131 audit(1768877171.904:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:12.325977 systemd[1]: Reload requested from client PID 2481 ('systemctl') (unit session-7.scope)...
Jan 20 02:46:12.326034 systemd[1]: Reloading...
Jan 20 02:46:12.768552 zram_generator::config[2527]: No configuration found.
Jan 20 02:46:13.845642 systemd[1]: Reloading finished in 1513 ms.
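The SERVICE_START audit records across these failed cycles carry epoch timestamps (1768877122.815, 1768877136.333, 1768877152.494, 1768877166.806), so the restart cadence can be measured directly from the log. The gaps of roughly 13–16 s would be consistent with a restart delay of about 10 s plus unit start-up latency, though the unit's actual `RestartSec` is not shown in this excerpt:

```python
# kubelet SERVICE_START epochs, taken from the audit records above.
starts = [1768877122.815, 1768877136.333, 1768877152.494, 1768877166.806]

gaps = [round(b - a, 3) for a, b in zip(starts, starts[1:])]
print(gaps)  # → [13.518, 16.161, 14.312]
```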
Jan 20 02:46:13.990207 kernel: audit: type=1334 audit(1768877173.945:293): prog-id=63 op=LOAD
Jan 20 02:46:13.945000 audit: BPF prog-id=63 op=LOAD
Jan 20 02:46:13.945000 audit: BPF prog-id=58 op=UNLOAD
Jan 20 02:46:13.978000 audit: BPF prog-id=64 op=LOAD
Jan 20 02:46:14.007613 kernel: audit: type=1334 audit(1768877173.945:294): prog-id=58 op=UNLOAD
Jan 20 02:46:14.007707 kernel: audit: type=1334 audit(1768877173.978:295): prog-id=64 op=LOAD
Jan 20 02:46:13.978000 audit: BPF prog-id=59 op=UNLOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=65 op=LOAD
Jan 20 02:46:14.043615 kernel: audit: type=1334 audit(1768877173.978:296): prog-id=59 op=UNLOAD
Jan 20 02:46:14.043721 kernel: audit: type=1334 audit(1768877174.016:297): prog-id=65 op=LOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=49 op=UNLOAD
Jan 20 02:46:14.056960 kernel: audit: type=1334 audit(1768877174.016:298): prog-id=49 op=UNLOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=66 op=LOAD
Jan 20 02:46:14.073866 kernel: audit: type=1334 audit(1768877174.016:299): prog-id=66 op=LOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=67 op=LOAD
Jan 20 02:46:14.097717 kernel: audit: type=1334 audit(1768877174.016:300): prog-id=67 op=LOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=50 op=UNLOAD
Jan 20 02:46:14.016000 audit: BPF prog-id=51 op=UNLOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=68 op=LOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=43 op=UNLOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=69 op=LOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=70 op=LOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=44 op=UNLOAD
Jan 20 02:46:14.028000 audit: BPF prog-id=45 op=UNLOAD
Jan 20 02:46:14.043000 audit: BPF prog-id=71 op=LOAD
Jan 20 02:46:14.043000 audit: BPF prog-id=60 op=UNLOAD
Jan 20 02:46:14.052000 audit: BPF prog-id=72 op=LOAD
Jan 20 02:46:14.052000 audit: BPF prog-id=73 op=LOAD
Jan 20 02:46:14.052000 audit: BPF prog-id=61 op=UNLOAD
Jan 20 02:46:14.052000 audit: BPF prog-id=62 op=UNLOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=74 op=LOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=55 op=UNLOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=75 op=LOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=76 op=LOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=56 op=UNLOAD
Jan 20 02:46:14.057000 audit: BPF prog-id=57 op=UNLOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=77 op=LOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=48 op=UNLOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=78 op=LOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=52 op=UNLOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=79 op=LOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=80 op=LOAD
Jan 20 02:46:14.079000 audit: BPF prog-id=53 op=UNLOAD
Jan 20 02:46:14.088000 audit: BPF prog-id=54 op=UNLOAD
Jan 20 02:46:14.088000 audit: BPF prog-id=81 op=LOAD
Jan 20 02:46:14.088000 audit: BPF prog-id=82 op=LOAD
Jan 20 02:46:14.088000 audit: BPF prog-id=46 op=UNLOAD
Jan 20 02:46:14.088000 audit: BPF prog-id=47 op=UNLOAD
Jan 20 02:46:14.191250 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 20 02:46:14.193229 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 20 02:46:14.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:46:14.200275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:46:14.200395 systemd[1]: kubelet.service: Consumed 315ms CPU time, 98.3M memory peak.
Jan 20 02:46:14.221923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:46:15.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:46:15.475593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:46:15.559326 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:46:15.971603 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:46:15.971603 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:46:15.971603 kubelet[2576]: I0120 02:46:15.969628 2576 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:46:18.683693 kubelet[2576]: I0120 02:46:18.680281 2576 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 02:46:18.683693 kubelet[2576]: I0120 02:46:18.684018 2576 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:46:18.683693 kubelet[2576]: I0120 02:46:18.684163 2576 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 02:46:18.683693 kubelet[2576]: I0120 02:46:18.684182 2576 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:46:18.687237 kubelet[2576]: I0120 02:46:18.685681 2576 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 02:46:18.818800 kubelet[2576]: E0120 02:46:18.818592 2576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 02:46:18.840579 kubelet[2576]: I0120 02:46:18.837827 2576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:46:18.873073 kubelet[2576]: I0120 02:46:18.865335 2576 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:46:18.910451 kubelet[2576]: I0120 02:46:18.907383 2576 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 02:46:18.914808 kubelet[2576]: I0120 02:46:18.913724 2576 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:46:18.914808 kubelet[2576]: I0120 02:46:18.913795 2576 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:46:18.914808 kubelet[2576]: I0120 02:46:18.914003 2576 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:46:18.914808 kubelet[2576]: I0120 02:46:18.914021 2576 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 02:46:18.915696 kubelet[2576]: I0120 02:46:18.914188 2576 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 02:46:18.924551 kubelet[2576]: I0120 02:46:18.922280 2576 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:46:18.924551 kubelet[2576]: I0120 02:46:18.922712 2576 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 02:46:18.924551 kubelet[2576]: I0120 02:46:18.922732 2576 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:46:18.924551 kubelet[2576]: I0120 02:46:18.922764 2576 kubelet.go:387] "Adding apiserver pod source"
Jan 20 02:46:18.924551 kubelet[2576]: I0120 02:46:18.922787 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:46:18.930662 kubelet[2576]: E0120 02:46:18.925748 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:46:18.930662 kubelet[2576]: E0120 02:46:18.926228 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:46:18.942566 kubelet[2576]: I0120 02:46:18.941738 2576 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 20 02:46:18.951124 kubelet[2576]: I0120 02:46:18.948722 2576 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 02:46:18.951124 kubelet[2576]: I0120 02:46:18.948791 2576 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 20 02:46:18.951124 kubelet[2576]: W0120 02:46:18.948905 2576 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 02:46:18.976949 kubelet[2576]: I0120 02:46:18.976801 2576 server.go:1262] "Started kubelet"
Jan 20 02:46:18.981389 kubelet[2576]: I0120 02:46:18.981022 2576 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:46:18.983541 kubelet[2576]: I0120 02:46:18.981852 2576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:46:18.983541 kubelet[2576]: I0120 02:46:18.983148 2576 server.go:310] "Adding debug handlers to kubelet server"
Jan 20 02:46:19.000974 kubelet[2576]: I0120 02:46:18.997547 2576 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 20 02:46:19.000974 kubelet[2576]: I0120 02:46:18.998276 2576 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:46:19.000974 kubelet[2576]: I0120 02:46:19.000329 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:46:19.021579 kubelet[2576]: I0120 02:46:19.018730 2576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:46:19.041938 kubelet[2576]: I0120 02:46:19.039691 2576 factory.go:223] Registration of the systemd container factory successfully
Jan 20 02:46:19.041938 kubelet[2576]: I0120 02:46:19.039833 2576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:46:19.056843 kubelet[2576]: I0120 02:46:19.045913 2576 factory.go:223] Registration of the containerd container factory successfully
Jan 20 02:46:19.067587 kubelet[2576]: I0120 02:46:19.057210 2576 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 20 02:46:19.067587 kubelet[2576]: E0120 02:46:19.057383 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.067587 kubelet[2576]: I0120 02:46:19.058204 2576 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 02:46:19.067587 kubelet[2576]: I0120 02:46:19.058266 2576 reconciler.go:29] "Reconciler: start to sync state"
Jan 20 02:46:19.074543 kubelet[2576]: E0120 02:46:19.036936 2576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c506c1fc379f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,LastTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:46:19.161847 kubelet[2576]: E0120 02:46:19.160837 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 02:46:19.161847 kubelet[2576]: E0120 02:46:19.161020 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms"
Jan 20 02:46:19.189918 kubelet[2576]: E0120 02:46:19.189745 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.277000 audit[2595]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.312798 kernel: kauditd_printk_skb: 34 callbacks suppressed
Jan 20 02:46:19.312938 kernel: audit: type=1325 audit(1768877179.277:335): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.313569 kubelet[2576]: E0120 02:46:19.313538 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.277000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8740da40 a2=0 a3=0 items=0 ppid=2576 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.381345 kernel: audit: type=1300 audit(1768877179.277:335): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8740da40 a2=0 a3=0 items=0 ppid=2576 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.391830 kernel: audit: type=1327 audit(1768877179.277:335): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 20 02:46:19.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 20 02:46:19.398849 kubelet[2576]: E0120 02:46:19.398284 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms"
Jan 20 02:46:19.403000 audit[2597]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.415618 kubelet[2576]: E0120 02:46:19.415568 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.430899 kubelet[2576]: I0120 02:46:19.426133 2576 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:46:19.430899 kubelet[2576]: I0120 02:46:19.426152 2576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:46:19.430899 kubelet[2576]: I0120 02:46:19.426173 2576 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:46:19.441437 kernel: audit: type=1325 audit(1768877179.403:336): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.446578 kernel: audit: type=1300 audit(1768877179.403:336): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6d2f6360 a2=0 a3=0 items=0 ppid=2576 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.403000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6d2f6360 a2=0 a3=0 items=0 ppid=2576 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.466068 kubelet[2576]: I0120 02:46:19.463859 2576 policy_none.go:49] "None policy: Start"
Jan 20 02:46:19.466068 kubelet[2576]: I0120 02:46:19.465603 2576 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 20 02:46:19.472205 kubelet[2576]: I0120 02:46:19.470907 2576 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 20 02:46:19.492796 kubelet[2576]: I0120 02:46:19.484646 2576 policy_none.go:47] "Start"
Jan 20 02:46:19.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 20 02:46:19.523545 kubelet[2576]: E0120 02:46:19.518044 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.529514 kernel: audit: type=1327 audit(1768877179.403:336): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 20 02:46:19.529613 kernel: audit: type=1325 audit(1768877179.431:337): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.431000 audit[2601]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.533819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
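The `PROCTITLE` records above carry the audited command line hex-encoded, with NUL bytes separating the arguments. Decoding the value from the record above (the one ending in `...4649524557414C4C002D740066696C746572`) recovers the iptables invocation:

```python
# proctitle value from the audit PROCTITLE record above.
hexstr = "69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572"

# NUL-separated argv, hex-encoded by the audit subsystem.
argv = [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]
print(" ".join(argv))  # → iptables -w 5 -N KUBE-FIREWALL -t filter
```

So this record is the kubelet (ppid=2576) creating its KUBE-FIREWALL chain via `xtables-nft-multi`.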
Jan 20 02:46:19.431000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe1158ffd0 a2=0 a3=0 items=0 ppid=2576 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.662116 kubelet[2576]: E0120 02:46:19.660961 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.713785 kernel: audit: type=1300 audit(1768877179.431:337): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe1158ffd0 a2=0 a3=0 items=0 ppid=2576 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.717647 kernel: audit: type=1327 audit(1768877179.431:337): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 20 02:46:19.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 20 02:46:19.468000 audit[2603]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.774802 kubelet[2576]: E0120 02:46:19.774675 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:19.791632 kernel: audit: type=1325 audit(1768877179.468:338): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:19.468000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4cb1a410 a2=0 a3=0 items=0 ppid=2576 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:19.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 20 02:46:19.810885 kubelet[2576]: E0120 02:46:19.810702 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:46:19.816866 kubelet[2576]: E0120 02:46:19.811721 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms"
Jan 20 02:46:19.888524 kubelet[2576]: E0120 02:46:19.887796 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:20.002371 kubelet[2576]: E0120 02:46:19.998892 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:46:20.002371 kubelet[2576]: E0120 02:46:19.999595 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:46:20.015000 audit[2608]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:20.015000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff40ad00a0 a2=0 a3=0 items=0 ppid=2576 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E
Jan 20 02:46:20.020000 audit[2609]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:20.020000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff26281840 a2=0 a3=0 items=0 ppid=2576 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 20 02:46:20.020000 audit[2610]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:46:20.020000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc211922e0 a2=0 a3=0 items=0 ppid=2576 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 20 02:46:20.020000 audit[2611]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:20.020000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3cf42060 a2=0 a3=0 items=0 ppid=2576 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 20 02:46:20.029180 kubelet[2576]: I0120 02:46:20.018165 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:46:20.029180 kubelet[2576]: I0120 02:46:20.028789 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:46:20.029180 kubelet[2576]: I0120 02:46:20.028837 2576 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 20 02:46:20.029180 kubelet[2576]: I0120 02:46:20.028867 2576 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 20 02:46:20.029180 kubelet[2576]: E0120 02:46:20.028999 2576 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:46:20.037000 audit[2613]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:46:20.037000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4c1dea20 a2=0 a3=0 items=0 ppid=2576 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 20 02:46:20.037000 audit[2612]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:46:20.037000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb62c3960 a2=0 a3=0 items=0 ppid=2576 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 20 02:46:20.066216 kubelet[2576]: E0120 02:46:20.066172 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 02:46:20.066000 audit[2614]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:46:20.066000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe13525a90 a2=0 a3=0 items=0 ppid=2576 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:46:20.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 20 02:46:20.076792 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 20 02:46:20.093588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 20 02:46:20.090000 audit[2615]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:46:20.090000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe63e76d00 a2=0 a3=0 items=0 ppid=2576 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:20.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 02:46:20.111912 kubelet[2576]: E0120 02:46:20.111749 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:20.131167 kubelet[2576]: E0120 02:46:20.129712 2576 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:46:20.143577 kubelet[2576]: E0120 02:46:20.143539 2576 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 02:46:20.144014 kubelet[2576]: I0120 02:46:20.143997 2576 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:46:20.144135 kubelet[2576]: I0120 02:46:20.144085 2576 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:46:20.163750 kubelet[2576]: I0120 02:46:20.147342 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:46:20.163750 kubelet[2576]: E0120 02:46:20.154802 2576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:46:20.163750 kubelet[2576]: E0120 02:46:20.154848 2576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:46:20.288181 kubelet[2576]: I0120 02:46:20.277225 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:20.288181 kubelet[2576]: E0120 02:46:20.277753 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:20.393942 kubelet[2576]: I0120 02:46:20.393724 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:46:20.395718 kubelet[2576]: I0120 02:46:20.395682 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:46:20.395831 kubelet[2576]: I0120 02:46:20.395798 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:46:20.484453 kubelet[2576]: E0120 02:46:20.479020 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:46:20.504955 kubelet[2576]: I0120 02:46:20.498095 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:20.504955 kubelet[2576]: I0120 02:46:20.498317 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:20.504955 kubelet[2576]: I0120 02:46:20.498563 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:20.504955 kubelet[2576]: I0120 02:46:20.498592 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:20.504955 kubelet[2576]: I0120 02:46:20.498613 2576 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:20.513693 kubelet[2576]: I0120 02:46:20.507468 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:20.513693 kubelet[2576]: E0120 02:46:20.507737 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:20.548642 systemd[1]: Created slice kubepods-burstable-podb9f0aa8a494f9d9e7199810b71439597.slice - libcontainer container kubepods-burstable-podb9f0aa8a494f9d9e7199810b71439597.slice. Jan 20 02:46:20.613250 kubelet[2576]: E0120 02:46:20.613187 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="1.6s" Jan 20 02:46:20.641540 kubelet[2576]: I0120 02:46:20.623651 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:46:20.647424 kubelet[2576]: E0120 02:46:20.647300 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:20.684745 kubelet[2576]: E0120 02:46:20.680844 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:20.716734 containerd[1640]: time="2026-01-20T02:46:20.716338812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9f0aa8a494f9d9e7199810b71439597,Namespace:kube-system,Attempt:0,}" Jan 20 02:46:20.727593 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 02:46:20.796190 kubelet[2576]: E0120 02:46:20.795569 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:20.833369 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 20 02:46:20.847190 kubelet[2576]: E0120 02:46:20.847146 2576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:46:20.867921 kubelet[2576]: E0120 02:46:20.866571 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:20.868299 containerd[1640]: time="2026-01-20T02:46:20.868253142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 02:46:20.878618 kubelet[2576]: E0120 02:46:20.877072 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Jan 20 02:46:20.909077 kubelet[2576]: E0120 02:46:20.896916 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:20.943176 containerd[1640]: time="2026-01-20T02:46:20.899384521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 02:46:20.976591 kubelet[2576]: I0120 02:46:20.974563 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:20.976591 kubelet[2576]: E0120 02:46:20.974940 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:21.076612 kubelet[2576]: E0120 02:46:21.069046 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:46:21.465553 kubelet[2576]: E0120 02:46:21.441592 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:46:21.792723 kubelet[2576]: I0120 02:46:21.790247 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:21.801559 kubelet[2576]: E0120 02:46:21.800325 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:22.222693 kubelet[2576]: E0120 02:46:22.222624 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="3.2s" Jan 20 02:46:22.632582 kubelet[2576]: E0120 02:46:22.622982 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:46:22.638155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649500863.mount: Deactivated successfully. Jan 20 02:46:22.693885 containerd[1640]: time="2026-01-20T02:46:22.692758370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:46:22.713754 containerd[1640]: time="2026-01-20T02:46:22.711554518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:46:22.717032 kubelet[2576]: E0120 02:46:22.716037 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:46:22.723070 containerd[1640]: time="2026-01-20T02:46:22.721891037Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=3286" Jan 20 02:46:22.725009 containerd[1640]: time="2026-01-20T02:46:22.724884915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 02:46:22.736666 containerd[1640]: time="2026-01-20T02:46:22.735749964Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:46:22.764734 containerd[1640]: time="2026-01-20T02:46:22.763443311Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:46:22.782463 containerd[1640]: time="2026-01-20T02:46:22.778513554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 02:46:22.782463 containerd[1640]: time="2026-01-20T02:46:22.781178923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:46:22.782463 containerd[1640]: time="2026-01-20T02:46:22.782169179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.870351559s" Jan 20 02:46:22.787514 containerd[1640]: time="2026-01-20T02:46:22.785368649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.040397057s" Jan 20 02:46:22.809520 containerd[1640]: time="2026-01-20T02:46:22.805675745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.827559723s" Jan 20 02:46:22.843803 kubelet[2576]: E0120 02:46:22.843382 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:46:23.015643 containerd[1640]: time="2026-01-20T02:46:23.012375282Z" level=info msg="connecting to shim c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a" address="unix:///run/containerd/s/e2764ac7385c53efab5f06acfdb6c2d8c6528b375562f264850ff3db334598b1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:46:23.672383 containerd[1640]: time="2026-01-20T02:46:23.645790115Z" level=info msg="connecting to shim 1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4" address="unix:///run/containerd/s/5f6b874d61086bd79b841c5df95c97f90874abb23338edce3dc8fc9a59a49f4d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:46:24.292047 kubelet[2576]: I0120 02:46:24.289345 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:24.311363 kubelet[2576]: E0120 02:46:24.311284 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: 
connection refused" node="localhost" Jan 20 02:46:24.586285 containerd[1640]: time="2026-01-20T02:46:24.586129524Z" level=info msg="connecting to shim 09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e" address="unix:///run/containerd/s/72ad155a1002ae1e4044a0a2f01775610dc1ba87da4cc340fcaf51ebcbdcf4c8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:46:24.750012 systemd[1]: Started cri-containerd-1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4.scope - libcontainer container 1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4. Jan 20 02:46:24.889000 audit: BPF prog-id=83 op=LOAD Jan 20 02:46:24.907521 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 20 02:46:24.907667 kernel: audit: type=1334 audit(1768877184.889:347): prog-id=83 op=LOAD Jan 20 02:46:25.603304 kubelet[2576]: E0120 02:46:25.592306 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="6.4s" Jan 20 02:46:25.603304 kubelet[2576]: E0120 02:46:25.593206 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:46:25.603304 kubelet[2576]: E0120 02:46:25.598618 2576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c506c1fc379f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,LastTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:46:25.614158 kubelet[2576]: E0120 02:46:25.611291 2576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:46:25.632874 systemd[1]: Started cri-containerd-09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e.scope - libcontainer container 09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e. Jan 20 02:46:25.632000 audit: BPF prog-id=84 op=LOAD Jan 20 02:46:25.632000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f8238 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.681951 systemd[1]: Started cri-containerd-c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a.scope - libcontainer container c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a. 
Jan 20 02:46:25.717022 kernel: audit: type=1334 audit(1768877185.632:348): prog-id=84 op=LOAD Jan 20 02:46:25.717174 kernel: audit: type=1300 audit(1768877185.632:348): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f8238 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.796731 kernel: audit: type=1327 audit(1768877185.632:348): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.796937 kernel: audit: type=1334 audit(1768877185.632:349): prog-id=84 op=UNLOAD Jan 20 02:46:25.632000 audit: BPF prog-id=84 op=UNLOAD Jan 20 02:46:25.811544 kernel: audit: type=1300 audit(1768877185.632:349): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.632000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.632000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.918364 kernel: audit: type=1327 audit(1768877185.632:349): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.633000 audit: BPF prog-id=85 op=LOAD Jan 20 02:46:25.633000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f8488 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.948550 kernel: audit: type=1334 audit(1768877185.633:350): prog-id=85 op=LOAD Jan 20 02:46:25.948766 kernel: audit: type=1300 audit(1768877185.633:350): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f8488 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:26.120109 kernel: audit: type=1327 audit(1768877185.633:350): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.633000 audit: BPF prog-id=86 op=LOAD Jan 20 02:46:25.633000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f8218 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.633000 audit: BPF prog-id=86 op=UNLOAD Jan 20 02:46:25.633000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.633000 audit: BPF prog-id=85 op=UNLOAD Jan 20 02:46:25.633000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:46:25.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.633000 audit: BPF prog-id=87 op=LOAD Jan 20 02:46:25.633000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f86e8 a2=98 a3=0 items=0 ppid=2637 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626633333561326438666331623034633532306461363565356531 Jan 20 02:46:25.930000 audit: BPF prog-id=88 op=LOAD Jan 20 02:46:25.937000 audit: BPF prog-id=89 op=LOAD Jan 20 02:46:25.937000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.937000 audit: BPF prog-id=89 op=UNLOAD Jan 20 02:46:25.937000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.937000 audit: BPF prog-id=90 op=LOAD Jan 20 02:46:25.937000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.937000 audit: BPF prog-id=91 op=LOAD Jan 20 02:46:25.937000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.937000 audit: BPF prog-id=91 op=UNLOAD Jan 20 02:46:25.937000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.938000 audit: BPF prog-id=90 op=UNLOAD Jan 20 02:46:25.938000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:25.938000 audit: BPF prog-id=92 op=LOAD Jan 20 02:46:25.938000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2635 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:25.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323133303262323330353931666539326631633935623234353461 Jan 20 02:46:26.419000 audit: BPF prog-id=93 op=LOAD Jan 20 02:46:26.812713 containerd[1640]: time="2026-01-20T02:46:26.776578124Z" level=error msg="get state for 
1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4" error="context deadline exceeded" Jan 20 02:46:26.812713 containerd[1640]: time="2026-01-20T02:46:26.778175967Z" level=warning msg="unknown status" status=0 Jan 20 02:46:26.823000 audit: BPF prog-id=94 op=LOAD Jan 20 02:46:26.823000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=94 op=UNLOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=95 op=LOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.832000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=96 op=LOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=96 op=UNLOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=95 op=UNLOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:46:26.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.832000 audit: BPF prog-id=97 op=LOAD Jan 20 02:46:26.832000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2675 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:26.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039323336653039646632646263366130333038386137383464613433 Jan 20 02:46:26.944344 containerd[1640]: time="2026-01-20T02:46:26.848360890Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:46:27.614673 kubelet[2576]: E0120 02:46:27.501047 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:46:27.624886 kubelet[2576]: I0120 02:46:27.615529 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:27.668101 kubelet[2576]: E0120 02:46:27.667785 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:27.763575 kubelet[2576]: E0120 02:46:27.751009 2576 reflector.go:205] 
"Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:46:27.766003 containerd[1640]: time="2026-01-20T02:46:27.765821588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4\"" Jan 20 02:46:27.781049 kubelet[2576]: E0120 02:46:27.780981 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:27.808234 containerd[1640]: time="2026-01-20T02:46:27.808184640Z" level=info msg="CreateContainer within sandbox \"1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:46:27.883053 containerd[1640]: time="2026-01-20T02:46:27.882872056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9f0aa8a494f9d9e7199810b71439597,Namespace:kube-system,Attempt:0,} returns sandbox id \"c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a\"" Jan 20 02:46:27.894619 kubelet[2576]: E0120 02:46:27.894310 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:27.940509 containerd[1640]: time="2026-01-20T02:46:27.939449803Z" level=info msg="CreateContainer within sandbox \"c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:46:28.183729 
kubelet[2576]: E0120 02:46:28.182068 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:46:28.387036 containerd[1640]: time="2026-01-20T02:46:28.347881318Z" level=info msg="Container 111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:46:28.527148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213451181.mount: Deactivated successfully. Jan 20 02:46:28.579334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78745250.mount: Deactivated successfully. Jan 20 02:46:28.603925 containerd[1640]: time="2026-01-20T02:46:28.603742662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e\"" Jan 20 02:46:28.606748 kubelet[2576]: E0120 02:46:28.605300 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:28.614276 containerd[1640]: time="2026-01-20T02:46:28.611653172Z" level=info msg="Container c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:46:28.645131 containerd[1640]: time="2026-01-20T02:46:28.643278732Z" level=info msg="CreateContainer within sandbox \"09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:46:28.685463 containerd[1640]: time="2026-01-20T02:46:28.683685725Z" level=info msg="CreateContainer within 
sandbox \"1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1\"" Jan 20 02:46:28.695113 containerd[1640]: time="2026-01-20T02:46:28.693625513Z" level=info msg="StartContainer for \"111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1\"" Jan 20 02:46:28.711078 containerd[1640]: time="2026-01-20T02:46:28.710629992Z" level=info msg="connecting to shim 111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1" address="unix:///run/containerd/s/5f6b874d61086bd79b841c5df95c97f90874abb23338edce3dc8fc9a59a49f4d" protocol=ttrpc version=3 Jan 20 02:46:28.729728 containerd[1640]: time="2026-01-20T02:46:28.729323854Z" level=info msg="CreateContainer within sandbox \"c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e\"" Jan 20 02:46:28.738527 containerd[1640]: time="2026-01-20T02:46:28.735847047Z" level=info msg="StartContainer for \"c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e\"" Jan 20 02:46:28.747063 containerd[1640]: time="2026-01-20T02:46:28.746886041Z" level=info msg="connecting to shim c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e" address="unix:///run/containerd/s/e2764ac7385c53efab5f06acfdb6c2d8c6528b375562f264850ff3db334598b1" protocol=ttrpc version=3 Jan 20 02:46:28.876364 containerd[1640]: time="2026-01-20T02:46:28.876303199Z" level=info msg="Container 0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:46:29.264447 containerd[1640]: time="2026-01-20T02:46:29.263187730Z" level=info msg="CreateContainer within sandbox \"09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd\"" Jan 20 02:46:29.288511 containerd[1640]: time="2026-01-20T02:46:29.286808130Z" level=info msg="StartContainer for \"0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd\"" Jan 20 02:46:29.335048 containerd[1640]: time="2026-01-20T02:46:29.334913766Z" level=info msg="connecting to shim 0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd" address="unix:///run/containerd/s/72ad155a1002ae1e4044a0a2f01775610dc1ba87da4cc340fcaf51ebcbdcf4c8" protocol=ttrpc version=3 Jan 20 02:46:29.412296 systemd[1]: Started cri-containerd-111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1.scope - libcontainer container 111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1. Jan 20 02:46:29.434133 systemd[1]: Started cri-containerd-c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e.scope - libcontainer container c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e. Jan 20 02:46:30.169071 kubelet[2576]: E0120 02:46:30.168939 2576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:46:30.247888 systemd[1]: Started cri-containerd-0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd.scope - libcontainer container 0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd. 
Jan 20 02:46:30.463000 audit: BPF prog-id=98 op=LOAD Jan 20 02:46:30.487378 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 20 02:46:30.487617 kernel: audit: type=1334 audit(1768877190.463:371): prog-id=98 op=LOAD Jan 20 02:46:30.489000 audit: BPF prog-id=99 op=LOAD Jan 20 02:46:30.507561 kernel: audit: type=1334 audit(1768877190.489:372): prog-id=99 op=LOAD Jan 20 02:46:30.565038 kernel: audit: type=1300 audit(1768877190.489:372): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.576616 kernel: audit: type=1327 audit(1768877190.489:372): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.489000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.492000 audit: BPF prog-id=99 op=UNLOAD Jan 20 02:46:30.679801 kernel: audit: type=1334 audit(1768877190.492:373): prog-id=99 op=UNLOAD Jan 20 02:46:30.679894 kernel: audit: type=1300 audit(1768877190.492:373): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.492000 audit[2760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.788541 kernel: audit: type=1327 audit(1768877190.492:373): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.818551 kernel: audit: type=1334 audit(1768877190.496:374): prog-id=100 op=LOAD Jan 20 02:46:30.818817 kernel: audit: type=1334 audit(1768877190.493:375): prog-id=101 op=LOAD Jan 20 02:46:30.818858 kernel: audit: type=1300 audit(1768877190.493:375): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.496000 audit: BPF prog-id=100 op=LOAD Jan 20 02:46:30.493000 audit: BPF prog-id=101 op=LOAD Jan 20 02:46:30.493000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.499000 audit: BPF prog-id=102 op=LOAD Jan 20 02:46:30.499000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.500000 audit: BPF prog-id=102 op=UNLOAD Jan 20 02:46:30.500000 audit[2760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.500000 audit: BPF prog-id=103 op=LOAD Jan 20 02:46:30.500000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.500000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.500000 audit: BPF prog-id=103 op=UNLOAD Jan 20 02:46:30.500000 audit[2758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.500000 audit: BPF prog-id=101 op=UNLOAD Jan 20 02:46:30.500000 audit[2760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.500000 audit: BPF prog-id=104 op=LOAD Jan 20 02:46:30.500000 
audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2635 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336313933643236373130646330333863663561373839616232323733 Jan 20 02:46:30.503000 audit: BPF prog-id=105 op=LOAD Jan 20 02:46:30.503000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.504000 audit: BPF prog-id=106 op=LOAD Jan 20 02:46:30.504000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.504000 
audit: BPF prog-id=106 op=UNLOAD Jan 20 02:46:30.504000 audit[2758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.504000 audit: BPF prog-id=105 op=UNLOAD Jan 20 02:46:30.504000 audit[2758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:30.504000 audit: BPF prog-id=107 op=LOAD Jan 20 02:46:30.504000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2637 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:30.504000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131316130353936656133366239313738613439393731303134313164 Jan 20 02:46:31.128000 audit: BPF prog-id=108 op=LOAD Jan 20 02:46:31.133000 audit: BPF prog-id=109 op=LOAD Jan 20 02:46:31.133000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.134000 audit: BPF prog-id=109 op=UNLOAD Jan 20 02:46:31.134000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.134000 audit: BPF prog-id=110 op=LOAD Jan 20 02:46:31.134000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.134000 audit: BPF prog-id=111 op=LOAD Jan 20 02:46:31.134000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.134000 audit: BPF prog-id=111 op=UNLOAD Jan 20 02:46:31.134000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.142000 audit: BPF prog-id=110 op=UNLOAD Jan 20 02:46:31.142000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:31.142000 audit: BPF prog-id=112 op=LOAD Jan 20 02:46:31.142000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2675 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:46:31.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061613462626662316563373665373935633062333861656430356564 Jan 20 02:46:32.020124 kubelet[2576]: E0120 02:46:32.012965 2576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="7s" Jan 20 02:46:32.650711 kubelet[2576]: E0120 02:46:32.648927 2576 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:46:33.513646 containerd[1640]: time="2026-01-20T02:46:33.509620756Z" level=info msg="StartContainer for \"0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd\" returns 
successfully" Jan 20 02:46:33.984590 containerd[1640]: time="2026-01-20T02:46:33.879395058Z" level=info msg="StartContainer for \"111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1\" returns successfully" Jan 20 02:46:33.984590 containerd[1640]: time="2026-01-20T02:46:33.980127298Z" level=info msg="StartContainer for \"c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e\" returns successfully" Jan 20 02:46:34.015899 kubelet[2576]: E0120 02:46:34.015728 2576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:46:34.095055 kubelet[2576]: I0120 02:46:34.095008 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:34.096173 kubelet[2576]: E0120 02:46:34.096055 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:34.096330 kubelet[2576]: E0120 02:46:34.096272 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:34.106798 kubelet[2576]: E0120 02:46:34.106676 2576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jan 20 02:46:34.133136 kubelet[2576]: E0120 02:46:34.128988 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:34.133136 kubelet[2576]: E0120 02:46:34.129188 2576 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:34.187920 kubelet[2576]: E0120 02:46:34.187468 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:34.194876 kubelet[2576]: E0120 02:46:34.194796 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:35.221368 kubelet[2576]: E0120 02:46:35.221320 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:35.222226 kubelet[2576]: E0120 02:46:35.222195 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:35.232527 kubelet[2576]: E0120 02:46:35.232387 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:35.240925 kubelet[2576]: E0120 02:46:35.240873 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:35.250918 kubelet[2576]: E0120 02:46:35.250878 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:35.251261 kubelet[2576]: E0120 02:46:35.251237 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:36.247059 
kubelet[2576]: E0120 02:46:36.239271 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:36.247059 kubelet[2576]: E0120 02:46:36.239677 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:36.247059 kubelet[2576]: E0120 02:46:36.240058 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:36.247059 kubelet[2576]: E0120 02:46:36.240187 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:36.304788 kubelet[2576]: E0120 02:46:36.286013 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:36.304788 kubelet[2576]: E0120 02:46:36.286210 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:40.172744 kubelet[2576]: E0120 02:46:40.172678 2576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:46:40.436365 kubelet[2576]: E0120 02:46:40.432043 2576 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 02:46:40.614871 kubelet[2576]: E0120 02:46:40.614184 2576 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c506c1fc379f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,LastTimestamp:2026-01-20 02:46:18.969356793 +0000 UTC m=+3.377476791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:46:40.759342 kubelet[2576]: E0120 02:46:40.757343 2576 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c506c3aea4455 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:46:19.424883797 +0000 UTC m=+3.833003764,LastTimestamp:2026-01-20 02:46:19.424883797 +0000 UTC m=+3.833003764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:46:41.124582 kubelet[2576]: I0120 02:46:41.119030 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:46:41.182722 kubelet[2576]: I0120 02:46:41.181707 2576 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:46:41.182722 kubelet[2576]: E0120 02:46:41.182179 2576 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:46:41.286602 kubelet[2576]: E0120 02:46:41.284071 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jan 20 02:46:41.392690 kubelet[2576]: E0120 02:46:41.387004 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:41.499284 kubelet[2576]: E0120 02:46:41.490316 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:41.592582 kubelet[2576]: E0120 02:46:41.590671 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:41.703792 kubelet[2576]: E0120 02:46:41.703652 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:41.816574 kubelet[2576]: E0120 02:46:41.816447 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:41.922279 kubelet[2576]: E0120 02:46:41.920725 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.028639 kubelet[2576]: E0120 02:46:42.027178 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.133269 kubelet[2576]: E0120 02:46:42.130840 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.238006 kubelet[2576]: E0120 02:46:42.237940 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.338977 kubelet[2576]: E0120 02:46:42.338912 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.440150 kubelet[2576]: E0120 02:46:42.440085 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.541663 kubelet[2576]: E0120 02:46:42.541436 2576 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.657565 kubelet[2576]: E0120 02:46:42.656977 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.758265 kubelet[2576]: E0120 02:46:42.757318 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.858391 kubelet[2576]: E0120 02:46:42.857970 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:42.960813 kubelet[2576]: E0120 02:46:42.958883 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.063040 kubelet[2576]: E0120 02:46:43.062871 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.164792 kubelet[2576]: E0120 02:46:43.164749 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.268155 kubelet[2576]: E0120 02:46:43.265193 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.366686 kubelet[2576]: E0120 02:46:43.366324 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.473374 kubelet[2576]: E0120 02:46:43.470311 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.578827 kubelet[2576]: E0120 02:46:43.575009 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.676688 kubelet[2576]: E0120 02:46:43.676638 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" 
not found" Jan 20 02:46:43.780014 kubelet[2576]: E0120 02:46:43.779158 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.885195 kubelet[2576]: E0120 02:46:43.883601 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:43.986563 kubelet[2576]: E0120 02:46:43.986292 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.090457 kubelet[2576]: E0120 02:46:44.089701 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.207438 kubelet[2576]: E0120 02:46:44.190770 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.297019 kubelet[2576]: E0120 02:46:44.296835 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.402071 kubelet[2576]: E0120 02:46:44.401870 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:44.405185 kubelet[2576]: E0120 02:46:44.399187 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.405444 kubelet[2576]: E0120 02:46:44.405418 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:44.541877 kubelet[2576]: E0120 02:46:44.512029 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.635541 kubelet[2576]: E0120 02:46:44.623216 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 20 02:46:44.776272 kubelet[2576]: E0120 02:46:44.769000 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.889235 kubelet[2576]: E0120 02:46:44.889021 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:44.990909 kubelet[2576]: E0120 02:46:44.990154 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.222888 kubelet[2576]: E0120 02:46:45.189265 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.377457 kubelet[2576]: E0120 02:46:45.294078 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.474034 kubelet[2576]: E0120 02:46:45.387741 2576 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:46:45.474034 kubelet[2576]: E0120 02:46:45.443639 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:45.574774 kubelet[2576]: E0120 02:46:45.564213 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.683710 kubelet[2576]: E0120 02:46:45.682732 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.843838 kubelet[2576]: E0120 02:46:45.828636 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:45.932941 kubelet[2576]: E0120 02:46:45.931049 2576 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 20 02:46:46.074970 kubelet[2576]: E0120 02:46:46.041192 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.222764 kubelet[2576]: E0120 02:46:46.176772 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.283401 kubelet[2576]: E0120 02:46:46.281759 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.396776 kubelet[2576]: E0120 02:46:46.392445 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.511558 kubelet[2576]: E0120 02:46:46.497158 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.611104 kubelet[2576]: E0120 02:46:46.605855 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.732311 kubelet[2576]: E0120 02:46:46.727737 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.845449 kubelet[2576]: E0120 02:46:46.842178 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:46.945844 kubelet[2576]: E0120 02:46:46.945369 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.062379 kubelet[2576]: E0120 02:46:47.061134 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.176647 kubelet[2576]: E0120 02:46:47.167721 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.283389 kubelet[2576]: E0120 02:46:47.282740 
2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.393350 kubelet[2576]: E0120 02:46:47.392039 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.495848 kubelet[2576]: E0120 02:46:47.494773 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.596806 kubelet[2576]: E0120 02:46:47.596004 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.727669 kubelet[2576]: E0120 02:46:47.716007 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:47.981178 kubelet[2576]: E0120 02:46:47.961831 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:48.096653 kubelet[2576]: E0120 02:46:48.092197 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:48.244468 kubelet[2576]: E0120 02:46:48.197967 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:48.371091 kubelet[2576]: E0120 02:46:48.369341 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:48.712617 kubelet[2576]: E0120 02:46:48.491222 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:48.712617 kubelet[2576]: E0120 02:46:48.711691 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:49.411990 kubelet[2576]: E0120 02:46:48.895152 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 20 02:46:49.411990 kubelet[2576]: E0120 02:46:49.468149 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:49.844721 kubelet[2576]: E0120 02:46:49.712768 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.071092 kubelet[2576]: E0120 02:46:50.045884 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.170745 kubelet[2576]: E0120 02:46:50.148935 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.176572 kubelet[2576]: E0120 02:46:50.174270 2576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:46:50.265718 kubelet[2576]: E0120 02:46:50.265586 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.387175 kubelet[2576]: E0120 02:46:50.386890 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.492031 kubelet[2576]: E0120 02:46:50.490867 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.591431 kubelet[2576]: E0120 02:46:50.591379 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.694866 kubelet[2576]: E0120 02:46:50.694586 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.802871 kubelet[2576]: E0120 02:46:50.802452 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:50.905582 kubelet[2576]: E0120 
02:46:50.903883 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.012721 kubelet[2576]: E0120 02:46:51.007991 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.127009 kubelet[2576]: E0120 02:46:51.126732 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.229518 kubelet[2576]: E0120 02:46:51.228852 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.339361 kubelet[2576]: E0120 02:46:51.335378 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.443309 kubelet[2576]: E0120 02:46:51.440866 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.447300 kubelet[2576]: E0120 02:46:51.443923 2576 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:46:51.620545 kubelet[2576]: E0120 02:46:51.619902 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.727790 kubelet[2576]: E0120 02:46:51.720133 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.830621 kubelet[2576]: E0120 02:46:51.830074 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:51.944280 kubelet[2576]: E0120 02:46:51.940301 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.064892 kubelet[2576]: E0120 02:46:52.044743 2576 kubelet_node_status.go:404] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.160249 kubelet[2576]: E0120 02:46:52.152863 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.264740 kubelet[2576]: E0120 02:46:52.264045 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.366602 kubelet[2576]: E0120 02:46:52.366252 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.472577 kubelet[2576]: E0120 02:46:52.466835 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.574294 kubelet[2576]: E0120 02:46:52.572599 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.677777 kubelet[2576]: E0120 02:46:52.672967 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.788042 kubelet[2576]: E0120 02:46:52.782150 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:52.889031 kubelet[2576]: E0120 02:46:52.888365 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.023799 kubelet[2576]: E0120 02:46:52.997679 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.104151 kubelet[2576]: E0120 02:46:53.098962 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.217467 kubelet[2576]: E0120 02:46:53.215853 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.324842 
kubelet[2576]: E0120 02:46:53.324127 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.427381 kubelet[2576]: E0120 02:46:53.425398 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.530421 kubelet[2576]: E0120 02:46:53.529948 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.663264 kubelet[2576]: E0120 02:46:53.630815 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.735526 kubelet[2576]: E0120 02:46:53.734849 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:53.842714 kubelet[2576]: E0120 02:46:53.842656 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.009439 kubelet[2576]: E0120 02:46:53.975215 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.080104 kubelet[2576]: E0120 02:46:54.079932 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.202932 kubelet[2576]: E0120 02:46:54.193872 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.322625 kubelet[2576]: E0120 02:46:54.314547 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.420876 kubelet[2576]: E0120 02:46:54.420567 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.608229 kubelet[2576]: E0120 02:46:54.534336 2576 kubelet_node_status.go:404] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.634773 kubelet[2576]: E0120 02:46:54.634718 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.738257 kubelet[2576]: E0120 02:46:54.737806 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.894672 kubelet[2576]: E0120 02:46:54.838933 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:54.980054 kubelet[2576]: E0120 02:46:54.979414 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.088765 kubelet[2576]: E0120 02:46:55.088311 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.213906 kubelet[2576]: E0120 02:46:55.195817 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.315144 kubelet[2576]: E0120 02:46:55.314828 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.420093 kubelet[2576]: E0120 02:46:55.418980 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.531875 kubelet[2576]: E0120 02:46:55.522234 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.623683 kubelet[2576]: E0120 02:46:55.623218 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.725395 kubelet[2576]: E0120 02:46:55.724877 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.834389 kubelet[2576]: E0120 
02:46:55.834258 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:55.979775 kubelet[2576]: E0120 02:46:55.979302 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:56.104105 kubelet[2576]: E0120 02:46:56.094045 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:46:56.167370 kubelet[2576]: I0120 02:46:56.164185 2576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:46:56.281730 kubelet[2576]: I0120 02:46:56.281082 2576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:46:56.384317 kubelet[2576]: I0120 02:46:56.383022 2576 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:46:57.141551 kubelet[2576]: I0120 02:46:57.137200 2576 apiserver.go:52] "Watching apiserver" Jan 20 02:46:57.153278 kubelet[2576]: E0120 02:46:57.151856 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:57.162270 kubelet[2576]: E0120 02:46:57.154733 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:57.162270 kubelet[2576]: E0120 02:46:57.155861 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:46:57.162270 kubelet[2576]: I0120 02:46:57.158926 2576 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 02:46:58.329995 systemd[1]: Reload 
requested from client PID 2872 ('systemctl') (unit session-7.scope)... Jan 20 02:46:58.330046 systemd[1]: Reloading... Jan 20 02:46:59.078931 zram_generator::config[2921]: No configuration found. Jan 20 02:46:59.729551 systemd[1]: Reloading finished in 1397 ms. Jan 20 02:46:59.817708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:46:59.833772 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:46:59.838180 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:46:59.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:46:59.845163 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 20 02:46:59.845263 kernel: audit: type=1131 audit(1768877219.835:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:46:59.850651 systemd[1]: kubelet.service: Consumed 4.821s CPU time, 128.3M memory peak. Jan 20 02:46:59.861958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:46:59.877555 kernel: audit: type=1334 audit(1768877219.869:396): prog-id=113 op=LOAD Jan 20 02:46:59.869000 audit: BPF prog-id=113 op=LOAD Jan 20 02:46:59.869000 audit: BPF prog-id=68 op=UNLOAD Jan 20 02:46:59.890974 kernel: audit: type=1334 audit(1768877219.869:397): prog-id=68 op=UNLOAD Jan 20 02:46:59.891146 kernel: audit: type=1334 audit(1768877219.870:398): prog-id=114 op=LOAD Jan 20 02:46:59.891194 kernel: audit: type=1334 audit(1768877219.870:399): prog-id=115 op=LOAD Jan 20 02:46:59.891229 kernel: audit: type=1334 audit(1768877219.870:400): prog-id=69 op=UNLOAD Jan 20 02:46:59.891258 kernel: audit: type=1334 audit(1768877219.870:401): prog-id=70 op=UNLOAD Jan 20 02:46:59.870000 audit: BPF prog-id=114 op=LOAD Jan 20 02:46:59.870000 audit: BPF prog-id=115 op=LOAD Jan 20 02:46:59.870000 audit: BPF prog-id=69 op=UNLOAD Jan 20 02:46:59.870000 audit: BPF prog-id=70 op=UNLOAD Jan 20 02:46:59.894249 kernel: audit: type=1334 audit(1768877219.887:402): prog-id=116 op=LOAD Jan 20 02:46:59.887000 audit: BPF prog-id=116 op=LOAD Jan 20 02:46:59.887000 audit: BPF prog-id=71 op=UNLOAD Jan 20 02:46:59.890000 audit: BPF prog-id=117 op=LOAD Jan 20 02:46:59.943579 kernel: audit: type=1334 audit(1768877219.887:403): prog-id=71 op=UNLOAD Jan 20 02:46:59.943708 kernel: audit: type=1334 audit(1768877219.890:404): prog-id=117 op=LOAD Jan 20 02:46:59.890000 audit: BPF prog-id=118 op=LOAD Jan 20 02:46:59.890000 audit: BPF prog-id=72 op=UNLOAD Jan 20 02:46:59.890000 audit: BPF prog-id=73 op=UNLOAD Jan 20 02:46:59.893000 audit: BPF prog-id=119 op=LOAD Jan 20 02:46:59.893000 audit: BPF prog-id=65 op=UNLOAD Jan 20 02:46:59.894000 audit: BPF prog-id=120 op=LOAD Jan 20 02:46:59.894000 audit: BPF prog-id=121 op=LOAD Jan 20 02:46:59.894000 audit: BPF prog-id=66 op=UNLOAD Jan 20 02:46:59.894000 audit: BPF prog-id=67 op=UNLOAD Jan 20 02:46:59.896000 audit: BPF prog-id=122 op=LOAD Jan 20 02:46:59.896000 audit: BPF prog-id=64 op=UNLOAD Jan 20 02:46:59.898000 audit: BPF prog-id=123 
op=LOAD Jan 20 02:46:59.898000 audit: BPF prog-id=63 op=UNLOAD Jan 20 02:46:59.900000 audit: BPF prog-id=124 op=LOAD Jan 20 02:46:59.900000 audit: BPF prog-id=125 op=LOAD Jan 20 02:46:59.900000 audit: BPF prog-id=81 op=UNLOAD Jan 20 02:46:59.900000 audit: BPF prog-id=82 op=UNLOAD Jan 20 02:46:59.902000 audit: BPF prog-id=126 op=LOAD Jan 20 02:46:59.902000 audit: BPF prog-id=74 op=UNLOAD Jan 20 02:46:59.902000 audit: BPF prog-id=127 op=LOAD Jan 20 02:46:59.902000 audit: BPF prog-id=128 op=LOAD Jan 20 02:46:59.902000 audit: BPF prog-id=75 op=UNLOAD Jan 20 02:46:59.902000 audit: BPF prog-id=76 op=UNLOAD Jan 20 02:46:59.904000 audit: BPF prog-id=129 op=LOAD Jan 20 02:46:59.948000 audit: BPF prog-id=77 op=UNLOAD Jan 20 02:46:59.953000 audit: BPF prog-id=130 op=LOAD Jan 20 02:46:59.953000 audit: BPF prog-id=78 op=UNLOAD Jan 20 02:46:59.953000 audit: BPF prog-id=131 op=LOAD Jan 20 02:46:59.953000 audit: BPF prog-id=132 op=LOAD Jan 20 02:46:59.953000 audit: BPF prog-id=79 op=UNLOAD Jan 20 02:46:59.953000 audit: BPF prog-id=80 op=UNLOAD Jan 20 02:47:00.661387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:47:00.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:47:00.706877 (kubelet)[2963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:47:01.094383 kubelet[2963]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:47:01.094383 kubelet[2963]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 02:47:01.094383 kubelet[2963]: I0120 02:47:01.090037 2963 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:47:01.123039 kubelet[2963]: I0120 02:47:01.122992 2963 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 02:47:01.123533 kubelet[2963]: I0120 02:47:01.123259 2963 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:47:01.123533 kubelet[2963]: I0120 02:47:01.123302 2963 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 02:47:01.123533 kubelet[2963]: I0120 02:47:01.123311 2963 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:47:01.123848 kubelet[2963]: I0120 02:47:01.123829 2963 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 02:47:01.154347 kubelet[2963]: I0120 02:47:01.151873 2963 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 02:47:01.168196 kubelet[2963]: I0120 02:47:01.167930 2963 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:47:01.186182 kubelet[2963]: I0120 02:47:01.186017 2963 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:47:01.235547 kubelet[2963]: I0120 02:47:01.234774 2963 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 02:47:01.237790 kubelet[2963]: I0120 02:47:01.235924 2963 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:47:01.237790 kubelet[2963]: I0120 02:47:01.235970 2963 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:47:01.237790 kubelet[2963]: I0120 02:47:01.236240 2963 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:47:01.237790 
kubelet[2963]: I0120 02:47:01.236254 2963 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 02:47:01.238176 kubelet[2963]: I0120 02:47:01.236292 2963 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 02:47:01.238176 kubelet[2963]: I0120 02:47:01.237386 2963 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:47:01.245886 kubelet[2963]: I0120 02:47:01.241804 2963 kubelet.go:475] "Attempting to sync node with API server" Jan 20 02:47:01.245886 kubelet[2963]: I0120 02:47:01.244737 2963 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:47:01.297463 kubelet[2963]: I0120 02:47:01.284461 2963 kubelet.go:387] "Adding apiserver pod source" Jan 20 02:47:01.297463 kubelet[2963]: I0120 02:47:01.284587 2963 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:47:01.297463 kubelet[2963]: I0120 02:47:01.287307 2963 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 02:47:01.297463 kubelet[2963]: I0120 02:47:01.287968 2963 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 02:47:01.297463 kubelet[2963]: I0120 02:47:01.288006 2963 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 02:47:01.377975 kubelet[2963]: I0120 02:47:01.373618 2963 server.go:1262] "Started kubelet" Jan 20 02:47:01.377975 kubelet[2963]: I0120 02:47:01.377398 2963 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:47:01.460935 kubelet[2963]: I0120 02:47:01.382456 2963 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:47:01.460935 kubelet[2963]: I0120 02:47:01.386404 2963 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 20 02:47:01.460935 kubelet[2963]: I0120 02:47:01.459748 2963 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 02:47:01.460935 kubelet[2963]: I0120 02:47:01.459991 2963 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:47:01.460935 kubelet[2963]: I0120 02:47:01.420772 2963 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:47:01.464550 kubelet[2963]: I0120 02:47:01.464047 2963 server.go:310] "Adding debug handlers to kubelet server" Jan 20 02:47:01.471632 kubelet[2963]: E0120 02:47:01.420393 2963 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:47:01.477032 kubelet[2963]: I0120 02:47:01.477005 2963 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 02:47:01.477359 kubelet[2963]: I0120 02:47:01.477334 2963 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 02:47:01.477693 kubelet[2963]: I0120 02:47:01.477674 2963 reconciler.go:29] "Reconciler: start to sync state" Jan 20 02:47:01.502806 kubelet[2963]: I0120 02:47:01.501783 2963 factory.go:223] Registration of the systemd container factory successfully Jan 20 02:47:01.502806 kubelet[2963]: I0120 02:47:01.502071 2963 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:47:01.523141 kubelet[2963]: I0120 02:47:01.523069 2963 factory.go:223] Registration of the containerd container factory successfully Jan 20 02:47:01.673733 kubelet[2963]: I0120 02:47:01.670898 2963 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 20 02:47:01.714451 kubelet[2963]: I0120 02:47:01.712693 2963 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 02:47:01.714451 kubelet[2963]: I0120 02:47:01.712730 2963 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 02:47:01.714451 kubelet[2963]: I0120 02:47:01.712836 2963 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 02:47:01.714451 kubelet[2963]: E0120 02:47:01.713057 2963 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:47:01.818778 kubelet[2963]: E0120 02:47:01.818733 2963 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:47:01.946809 kubelet[2963]: I0120 02:47:01.946698 2963 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.950745 2963 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.950790 2963 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.950983 2963 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.951000 2963 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.951022 2963 policy_none.go:49] "None policy: Start" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.951035 2963 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.951050 2963 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 02:47:01.956325 kubelet[2963]: I0120 02:47:01.951230 2963 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 02:47:01.956325 
kubelet[2963]: I0120 02:47:01.951247 2963 policy_none.go:47] "Start" Jan 20 02:47:02.002219 kubelet[2963]: E0120 02:47:02.001623 2963 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 02:47:02.002219 kubelet[2963]: I0120 02:47:02.001886 2963 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:47:02.002219 kubelet[2963]: I0120 02:47:02.001901 2963 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:47:02.002697 kubelet[2963]: I0120 02:47:02.002680 2963 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:47:02.024459 kubelet[2963]: E0120 02:47:02.024422 2963 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 02:47:02.045686 kubelet[2963]: I0120 02:47:02.045648 2963 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:47:02.080970 kubelet[2963]: I0120 02:47:02.056720 2963 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.093378 kubelet[2963]: I0120 02:47:02.090019 2963 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:02.101687 kubelet[2963]: I0120 02:47:02.101643 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.110752 kubelet[2963]: I0120 02:47:02.110716 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:47:02.110920 kubelet[2963]: I0120 02:47:02.110898 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:02.117736 kubelet[2963]: I0120 02:47:02.116363 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:02.145681 kubelet[2963]: I0120 02:47:02.145626 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9f0aa8a494f9d9e7199810b71439597-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9f0aa8a494f9d9e7199810b71439597\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:02.145922 kubelet[2963]: I0120 02:47:02.145899 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.146021 kubelet[2963]: I0120 02:47:02.146001 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.164733 kubelet[2963]: I0120 02:47:02.164694 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.165027 kubelet[2963]: I0120 02:47:02.164999 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.185684 kubelet[2963]: E0120 02:47:02.159065 2963 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 02:47:02.265202 kubelet[2963]: I0120 02:47:02.261013 2963 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:47:02.293173 kubelet[2963]: E0120 02:47:02.290765 2963 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:47:02.293173 kubelet[2963]: I0120 02:47:02.291189 2963 apiserver.go:52] "Watching apiserver" Jan 20 02:47:02.318417 kubelet[2963]: E0120 02:47:02.315943 2963 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:02.318417 kubelet[2963]: E0120 02:47:02.316291 
2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:02.382858 kubelet[2963]: I0120 02:47:02.382769 2963 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 02:47:02.451925 kubelet[2963]: I0120 02:47:02.450425 2963 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 02:47:02.471846 kubelet[2963]: I0120 02:47:02.471702 2963 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:47:02.497838 kubelet[2963]: E0120 02:47:02.495904 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:02.601826 kubelet[2963]: E0120 02:47:02.597821 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:02.702269 kubelet[2963]: I0120 02:47:02.702018 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.701997187 podStartE2EDuration="6.701997187s" podCreationTimestamp="2026-01-20 02:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:47:02.696901799 +0000 UTC m=+1.951968267" watchObservedRunningTime="2026-01-20 02:47:02.701997187 +0000 UTC m=+1.957063765" Jan 20 02:47:02.949962 kubelet[2963]: E0120 02:47:02.949215 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:02.953559 kubelet[2963]: E0120 02:47:02.953412 2963 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:02.966467 kubelet[2963]: I0120 02:47:02.963796 2963 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:03.090756 kubelet[2963]: I0120 02:47:03.089320 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.089296467 podStartE2EDuration="7.089296467s" podCreationTimestamp="2026-01-20 02:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:47:02.967802375 +0000 UTC m=+2.222868924" watchObservedRunningTime="2026-01-20 02:47:03.089296467 +0000 UTC m=+2.344362915" Jan 20 02:47:03.131058 kubelet[2963]: E0120 02:47:03.131012 2963 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 02:47:03.134357 kubelet[2963]: E0120 02:47:03.134288 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:03.198815 kubelet[2963]: I0120 02:47:03.198620 2963 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 02:47:03.203175 containerd[1640]: time="2026-01-20T02:47:03.201984462Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 02:47:03.208062 kubelet[2963]: I0120 02:47:03.204644 2963 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 02:47:03.271632 kubelet[2963]: I0120 02:47:03.259688 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.259668555 podStartE2EDuration="7.259668555s" podCreationTimestamp="2026-01-20 02:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:47:03.090270974 +0000 UTC m=+2.345337432" watchObservedRunningTime="2026-01-20 02:47:03.259668555 +0000 UTC m=+2.514735013" Jan 20 02:47:03.724712 kubelet[2963]: I0120 02:47:03.723688 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/799d02e4-e040-4444-9ac4-bf266dbc2188-xtables-lock\") pod \"kube-proxy-z2mhx\" (UID: \"799d02e4-e040-4444-9ac4-bf266dbc2188\") " pod="kube-system/kube-proxy-z2mhx" Jan 20 02:47:03.724712 kubelet[2963]: I0120 02:47:03.723765 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/799d02e4-e040-4444-9ac4-bf266dbc2188-lib-modules\") pod \"kube-proxy-z2mhx\" (UID: \"799d02e4-e040-4444-9ac4-bf266dbc2188\") " pod="kube-system/kube-proxy-z2mhx" Jan 20 02:47:03.724712 kubelet[2963]: I0120 02:47:03.723793 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/799d02e4-e040-4444-9ac4-bf266dbc2188-kube-api-access-2mnlz\") pod \"kube-proxy-z2mhx\" (UID: \"799d02e4-e040-4444-9ac4-bf266dbc2188\") " pod="kube-system/kube-proxy-z2mhx" Jan 20 02:47:03.724712 kubelet[2963]: I0120 02:47:03.723825 2963 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/799d02e4-e040-4444-9ac4-bf266dbc2188-kube-proxy\") pod \"kube-proxy-z2mhx\" (UID: \"799d02e4-e040-4444-9ac4-bf266dbc2188\") " pod="kube-system/kube-proxy-z2mhx" Jan 20 02:47:03.859452 kubelet[2963]: E0120 02:47:03.856350 2963 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Jan 20 02:47:03.859452 kubelet[2963]: E0120 02:47:03.856469 2963 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/799d02e4-e040-4444-9ac4-bf266dbc2188-kube-proxy podName:799d02e4-e040-4444-9ac4-bf266dbc2188 nodeName:}" failed. No retries permitted until 2026-01-20 02:47:04.356437537 +0000 UTC m=+3.611503985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/799d02e4-e040-4444-9ac4-bf266dbc2188-kube-proxy") pod "kube-proxy-z2mhx" (UID: "799d02e4-e040-4444-9ac4-bf266dbc2188") : object "kube-system"/"kube-proxy" not registered Jan 20 02:47:03.948305 systemd[1]: Created slice kubepods-besteffort-pod799d02e4_e040_4444_9ac4_bf266dbc2188.slice - libcontainer container kubepods-besteffort-pod799d02e4_e040_4444_9ac4_bf266dbc2188.slice. 
Jan 20 02:47:03.987962 kubelet[2963]: E0120 02:47:03.981900 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:04.020976 kubelet[2963]: E0120 02:47:04.020940 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:04.028263 kubelet[2963]: E0120 02:47:04.027985 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:04.618752 kubelet[2963]: E0120 02:47:04.617921 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:04.657832 containerd[1640]: time="2026-01-20T02:47:04.657759207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2mhx,Uid:799d02e4-e040-4444-9ac4-bf266dbc2188,Namespace:kube-system,Attempt:0,}" Jan 20 02:47:04.934742 containerd[1640]: time="2026-01-20T02:47:04.934291677Z" level=info msg="connecting to shim c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d" address="unix:///run/containerd/s/813c820dbe35d5fdf2508b6a443ce11dbe90725e39bab292992d002071f81e68" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:47:04.985272 kubelet[2963]: E0120 02:47:04.985207 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:04.987664 kubelet[2963]: E0120 02:47:04.985792 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:05.310267 
systemd[1]: Started cri-containerd-c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d.scope - libcontainer container c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d. Jan 20 02:47:05.412000 audit: BPF prog-id=133 op=LOAD Jan 20 02:47:05.433861 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 02:47:05.433988 kernel: audit: type=1334 audit(1768877225.412:437): prog-id=133 op=LOAD Jan 20 02:47:05.437000 audit: BPF prog-id=134 op=LOAD Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.486631 kernel: audit: type=1334 audit(1768877225.437:438): prog-id=134 op=LOAD Jan 20 02:47:05.486868 kernel: audit: type=1300 audit(1768877225.437:438): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.437000 audit: BPF prog-id=134 op=UNLOAD Jan 20 02:47:05.527729 kernel: audit: type=1327 audit(1768877225.437:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.527780 kernel: audit: type=1334 
audit(1768877225.437:439): prog-id=134 op=UNLOAD Jan 20 02:47:05.527808 kernel: audit: type=1300 audit(1768877225.437:439): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.583676 kernel: audit: type=1327 audit(1768877225.437:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.437000 audit: BPF prog-id=135 op=LOAD Jan 20 02:47:05.592688 kernel: audit: type=1334 audit(1768877225.437:440): prog-id=135 op=LOAD Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.635750 kernel: audit: type=1300 audit(1768877225.437:440): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3019 
pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.669744 kernel: audit: type=1327 audit(1768877225.437:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.437000 audit: BPF prog-id=136 op=LOAD Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.437000 audit: BPF prog-id=136 op=UNLOAD Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.437000 audit: BPF prog-id=135 op=UNLOAD Jan 20 02:47:05.437000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.438000 audit: BPF prog-id=137 op=LOAD Jan 20 02:47:05.438000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3019 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:05.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335633935666363373665623830333932373530343064363866643136 Jan 20 02:47:05.682685 containerd[1640]: time="2026-01-20T02:47:05.682627383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2mhx,Uid:799d02e4-e040-4444-9ac4-bf266dbc2188,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d\"" Jan 20 02:47:05.690016 kubelet[2963]: E0120 
02:47:05.687034 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:05.788264 containerd[1640]: time="2026-01-20T02:47:05.785968876Z" level=info msg="CreateContainer within sandbox \"c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 02:47:05.910897 containerd[1640]: time="2026-01-20T02:47:05.910758948Z" level=info msg="Container d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:47:06.015622 containerd[1640]: time="2026-01-20T02:47:06.013274133Z" level=info msg="CreateContainer within sandbox \"c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031\"" Jan 20 02:47:06.028288 containerd[1640]: time="2026-01-20T02:47:06.022920951Z" level=info msg="StartContainer for \"d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031\"" Jan 20 02:47:06.036282 containerd[1640]: time="2026-01-20T02:47:06.032758926Z" level=info msg="connecting to shim d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031" address="unix:///run/containerd/s/813c820dbe35d5fdf2508b6a443ce11dbe90725e39bab292992d002071f81e68" protocol=ttrpc version=3 Jan 20 02:47:06.037370 kubelet[2963]: E0120 02:47:06.037340 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:06.206820 systemd[1]: Started cri-containerd-d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031.scope - libcontainer container d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031. 
Jan 20 02:47:06.605000 audit: BPF prog-id=138 op=LOAD Jan 20 02:47:06.605000 audit[3057]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3019 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:06.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432393432363433396364653734363531613266626133303534393836 Jan 20 02:47:06.605000 audit: BPF prog-id=139 op=LOAD Jan 20 02:47:06.605000 audit[3057]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3019 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:06.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432393432363433396364653734363531613266626133303534393836 Jan 20 02:47:06.605000 audit: BPF prog-id=139 op=UNLOAD Jan 20 02:47:06.605000 audit[3057]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:06.605000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432393432363433396364653734363531613266626133303534393836 Jan 20 02:47:06.605000 audit: BPF prog-id=138 op=UNLOAD Jan 20 02:47:06.605000 audit[3057]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3019 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:06.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432393432363433396364653734363531613266626133303534393836 Jan 20 02:47:06.605000 audit: BPF prog-id=140 op=LOAD Jan 20 02:47:06.605000 audit[3057]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3019 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:06.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432393432363433396364653734363531613266626133303534393836 Jan 20 02:47:06.751225 kubelet[2963]: I0120 02:47:06.750718 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c86da12-4a84-4698-8c7c-31a4cd7ea667-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4qw7r\" (UID: \"8c86da12-4a84-4698-8c7c-31a4cd7ea667\") " 
pod="tigera-operator/tigera-operator-65cdcdfd6d-4qw7r" Jan 20 02:47:06.752073 kubelet[2963]: I0120 02:47:06.751252 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wmr\" (UniqueName: \"kubernetes.io/projected/8c86da12-4a84-4698-8c7c-31a4cd7ea667-kube-api-access-42wmr\") pod \"tigera-operator-65cdcdfd6d-4qw7r\" (UID: \"8c86da12-4a84-4698-8c7c-31a4cd7ea667\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4qw7r" Jan 20 02:47:06.775773 systemd[1]: Created slice kubepods-besteffort-pod8c86da12_4a84_4698_8c7c_31a4cd7ea667.slice - libcontainer container kubepods-besteffort-pod8c86da12_4a84_4698_8c7c_31a4cd7ea667.slice. Jan 20 02:47:06.814442 containerd[1640]: time="2026-01-20T02:47:06.814372799Z" level=info msg="StartContainer for \"d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031\" returns successfully" Jan 20 02:47:07.051888 kubelet[2963]: E0120 02:47:07.049989 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:07.113352 containerd[1640]: time="2026-01-20T02:47:07.112926279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4qw7r,Uid:8c86da12-4a84-4698-8c7c-31a4cd7ea667,Namespace:tigera-operator,Attempt:0,}" Jan 20 02:47:07.292206 containerd[1640]: time="2026-01-20T02:47:07.291626671Z" level=info msg="connecting to shim 7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8" address="unix:///run/containerd/s/12e870e9b08ffb85a937518ca1e5727435392214ef192bae8c840132d15a2b99" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:47:07.443653 systemd[1]: Started cri-containerd-7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8.scope - libcontainer container 7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8. 
Jan 20 02:47:07.588000 audit: BPF prog-id=141 op=LOAD Jan 20 02:47:07.602000 audit: BPF prog-id=142 op=LOAD Jan 20 02:47:07.602000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=142 op=UNLOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=143 op=LOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.605000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=144 op=LOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=144 op=UNLOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=143 op=UNLOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:47:07.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.605000 audit: BPF prog-id=145 op=LOAD Jan 20 02:47:07.605000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3110 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643863386438336335613563303038303237633465633165643431 Jan 20 02:47:07.851000 audit[3169]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:07.851000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2cc73380 a2=0 a3=7fff2cc7336c items=0 ppid=3069 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 02:47:07.921000 audit[3173]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:07.921000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc75e982d0 a2=0 a3=7ffc75e982bc items=0 ppid=3069 pid=3173 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 02:47:07.921000 audit[3174]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:07.921000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeddb9eff0 a2=0 a3=7ffeddb9efdc items=0 ppid=3069 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 02:47:07.937000 audit[3167]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:07.937000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf85e9f30 a2=0 a3=7ffcf85e9f1c items=0 ppid=3069 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 02:47:07.976000 audit[3180]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:07.976000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc099eb190 a2=0 a3=7ffc099eb17c items=0 ppid=3069 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:07.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 02:47:08.020000 audit[3181]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.020000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1866ca10 a2=0 a3=7fff1866c9fc items=0 ppid=3069 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 02:47:08.034000 audit[3184]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.034000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffedb0de410 a2=0 a3=7ffedb0de3fc items=0 ppid=3069 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 02:47:08.050063 containerd[1640]: time="2026-01-20T02:47:08.049314973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4qw7r,Uid:8c86da12-4a84-4698-8c7c-31a4cd7ea667,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8\"" Jan 20 02:47:08.074644 containerd[1640]: 
time="2026-01-20T02:47:08.069730714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 02:47:08.084000 audit[3186]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.084000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd6c6f1150 a2=0 a3=7ffd6c6f113c items=0 ppid=3069 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 20 02:47:08.102087 kubelet[2963]: E0120 02:47:08.101423 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:08.116000 audit[3189]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.116000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5da1bda0 a2=0 a3=7ffd5da1bd8c items=0 ppid=3069 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 
02:47:08.158000 audit[3190]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.158000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4697c050 a2=0 a3=7ffe4697c03c items=0 ppid=3069 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 02:47:08.221000 audit[3192]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.221000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff03dbf620 a2=0 a3=7fff03dbf60c items=0 ppid=3069 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 02:47:08.233000 audit[3193]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.233000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7d876470 a2=0 a3=7ffe7d87645c items=0 ppid=3069 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.233000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 02:47:08.251000 audit[3195]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.251000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd446e9150 a2=0 a3=7ffd446e913c items=0 ppid=3069 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:08.290000 audit[3198]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.290000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbf086d50 a2=0 a3=7ffcbf086d3c items=0 ppid=3069 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:08.305000 audit[3199]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.305000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffce4c3be20 a2=0 a3=7ffce4c3be0c items=0 ppid=3069 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 02:47:08.344000 audit[3201]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.344000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc6a55520 a2=0 a3=7ffdc6a5550c items=0 ppid=3069 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 02:47:08.363000 audit[3202]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.363000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc615b0620 a2=0 a3=7ffc615b060c items=0 ppid=3069 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 02:47:08.399000 audit[3204]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 
02:47:08.399000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe22b698b0 a2=0 a3=7ffe22b6989c items=0 ppid=3069 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 20 02:47:08.431000 audit[3207]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.431000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4f8b38c0 a2=0 a3=7ffc4f8b38ac items=0 ppid=3069 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 20 02:47:08.453000 audit[3210]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.453000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5abe6330 a2=0 a3=7ffc5abe631c items=0 ppid=3069 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 20 02:47:08.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 02:47:08.462000 audit[3211]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.462000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6867e180 a2=0 a3=7ffe6867e16c items=0 ppid=3069 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 02:47:08.490000 audit[3213]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.490000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff20e45970 a2=0 a3=7fff20e4595c items=0 ppid=3069 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:08.537000 audit[3216]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.537000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb813b1e0 a2=0 a3=7fffb813b1cc 
items=0 ppid=3069 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:08.554000 audit[3217]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.554000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd66d8f610 a2=0 a3=7ffd66d8f5fc items=0 ppid=3069 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 02:47:08.570000 audit[3219]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:47:08.570000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe1bc74b70 a2=0 a3=7ffe1bc74b5c items=0 ppid=3069 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 02:47:08.921000 audit[3225]: NETFILTER_CFG table=filter:79 family=2 entries=8 
op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:08.921000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc090a8480 a2=0 a3=7ffc090a846c items=0 ppid=3069 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:08.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:09.000000 audit[3225]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:09.000000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc090a8480 a2=0 a3=7ffc090a846c items=0 ppid=3069 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:09.014000 audit[3230]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.014000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffecc8539f0 a2=0 a3=7ffecc8539dc items=0 ppid=3069 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 02:47:09.035000 audit[3232]: NETFILTER_CFG table=filter:82 
family=10 entries=2 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.035000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd8360eeb0 a2=0 a3=7ffd8360ee9c items=0 ppid=3069 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 02:47:09.059000 audit[3235]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.059000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff71903a90 a2=0 a3=7fff71903a7c items=0 ppid=3069 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 20 02:47:09.078000 audit[3236]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.078000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebe0cc420 a2=0 a3=7ffebe0cc40c items=0 ppid=3069 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 02:47:09.102000 audit[3238]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.102000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb97a4c10 a2=0 a3=7ffeb97a4bfc items=0 ppid=3069 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 02:47:09.111000 audit[3239]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.111000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff50da0280 a2=0 a3=7fff50da026c items=0 ppid=3069 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.111000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 02:47:09.149000 audit[3241]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.149000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffecc5d5fc0 a2=0 
a3=7ffecc5d5fac items=0 ppid=3069 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:09.204000 audit[3244]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.204000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffda3670bc0 a2=0 a3=7ffda3670bac items=0 ppid=3069 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:09.224000 audit[3245]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.224000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4aedcf70 a2=0 a3=7ffc4aedcf5c items=0 ppid=3069 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.224000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 02:47:09.266000 audit[3247]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.266000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea766c220 a2=0 a3=7ffea766c20c items=0 ppid=3069 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 02:47:09.278000 audit[3248]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.278000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9b8bbd50 a2=0 a3=7ffe9b8bbd3c items=0 ppid=3069 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 02:47:09.296000 audit[3250]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.296000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcda389f50 a2=0 a3=7ffcda389f3c items=0 ppid=3069 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 20 02:47:09.391000 audit[3253]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.391000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff325f55e0 a2=0 a3=7fff325f55cc items=0 ppid=3069 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 02:47:09.457000 audit[3256]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.457000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddacc5920 a2=0 a3=7ffddacc590c items=0 ppid=3069 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.457000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 20 02:47:09.475000 audit[3257]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.475000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed961d8e0 a2=0 a3=7ffed961d8cc items=0 ppid=3069 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 02:47:09.526000 audit[3259]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.526000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffebc1245b0 a2=0 a3=7ffebc12459c items=0 ppid=3069 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:09.582000 audit[3262]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.582000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffeb03a350 a2=0 a3=7fffeb03a33c items=0 ppid=3069 pid=3262 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:47:09.583000 audit[3263]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.583000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8ac2ee70 a2=0 a3=7ffc8ac2ee5c items=0 ppid=3069 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 02:47:09.611000 audit[3265]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.611000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd76cb1d00 a2=0 a3=7ffd76cb1cec items=0 ppid=3069 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 02:47:09.621000 audit[3266]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3266 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.621000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd644bc3f0 a2=0 a3=7ffd644bc3dc items=0 ppid=3069 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 02:47:09.665000 audit[3268]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.665000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd28416c80 a2=0 a3=7ffd28416c6c items=0 ppid=3069 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:47:09.709000 audit[3271]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:47:09.709000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe223de540 a2=0 a3=7ffe223de52c items=0 ppid=3069 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:47:09.747000 audit[3273]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule 
pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 02:47:09.747000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcba084bb0 a2=0 a3=7ffcba084b9c items=0 ppid=3069 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.747000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:09.758000 audit[3273]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 02:47:09.758000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcba084bb0 a2=0 a3=7ffcba084b9c items=0 ppid=3069 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:09.758000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:09.888645 kubelet[2963]: E0120 02:47:09.882053 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:09.996749 kubelet[2963]: I0120 02:47:09.996307 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2mhx" podStartSLOduration=6.996281879 podStartE2EDuration="6.996281879s" podCreationTimestamp="2026-01-20 02:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:47:07.162899454 +0000 UTC m=+6.417965942" watchObservedRunningTime="2026-01-20 02:47:09.996281879 
+0000 UTC m=+9.251348327" Jan 20 02:47:10.099221 kubelet[2963]: E0120 02:47:10.099128 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:10.193736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330901109.mount: Deactivated successfully. Jan 20 02:47:11.992211 kubelet[2963]: E0120 02:47:11.985910 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:47:19.063394 containerd[1640]: time="2026-01-20T02:47:19.063335062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:47:19.078035 containerd[1640]: time="2026-01-20T02:47:19.077979420Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 20 02:47:19.085640 containerd[1640]: time="2026-01-20T02:47:19.083957331Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:47:19.108680 containerd[1640]: time="2026-01-20T02:47:19.104734406Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:47:19.112338 containerd[1640]: time="2026-01-20T02:47:19.111678589Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 11.041893053s" Jan 20 
02:47:19.112338 containerd[1640]: time="2026-01-20T02:47:19.111741515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 02:47:19.188430 containerd[1640]: time="2026-01-20T02:47:19.186963765Z" level=info msg="CreateContainer within sandbox \"7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 02:47:19.292974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525195567.mount: Deactivated successfully. Jan 20 02:47:19.345563 containerd[1640]: time="2026-01-20T02:47:19.344440762Z" level=info msg="Container 9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:47:19.379270 containerd[1640]: time="2026-01-20T02:47:19.378354573Z" level=info msg="CreateContainer within sandbox \"7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656\"" Jan 20 02:47:19.396186 containerd[1640]: time="2026-01-20T02:47:19.393780924Z" level=info msg="StartContainer for \"9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656\"" Jan 20 02:47:19.403907 containerd[1640]: time="2026-01-20T02:47:19.403065070Z" level=info msg="connecting to shim 9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656" address="unix:///run/containerd/s/12e870e9b08ffb85a937518ca1e5727435392214ef192bae8c840132d15a2b99" protocol=ttrpc version=3 Jan 20 02:47:19.489334 systemd[1]: Started cri-containerd-9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656.scope - libcontainer container 9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656. 
Jan 20 02:47:19.646281 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 02:47:19.646583 kernel: audit: type=1334 audit(1768877239.640:509): prog-id=146 op=LOAD Jan 20 02:47:19.640000 audit: BPF prog-id=146 op=LOAD Jan 20 02:47:19.669972 kernel: audit: type=1334 audit(1768877239.658:510): prog-id=147 op=LOAD Jan 20 02:47:19.658000 audit: BPF prog-id=147 op=LOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.801545 kernel: audit: type=1300 audit(1768877239.658:510): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.801690 kernel: audit: type=1327 audit(1768877239.658:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: BPF prog-id=147 op=UNLOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.873258 kernel: audit: type=1334 audit(1768877239.658:511): prog-id=147 op=UNLOAD Jan 20 02:47:19.873433 kernel: audit: type=1300 audit(1768877239.658:511): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.873569 kernel: audit: type=1327 audit(1768877239.658:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: BPF prog-id=148 op=LOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.972802 kernel: audit: type=1334 audit(1768877239.658:512): prog-id=148 op=LOAD Jan 20 02:47:19.977250 kernel: audit: type=1300 audit(1768877239.658:512): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:47:19.977459 kernel: audit: type=1327 audit(1768877239.658:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.985425 containerd[1640]: time="2026-01-20T02:47:19.983555290Z" level=info msg="StartContainer for \"9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656\" returns successfully" Jan 20 02:47:19.658000 audit: BPF prog-id=149 op=LOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: BPF prog-id=149 op=UNLOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.658000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: BPF prog-id=148 op=UNLOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:19.658000 audit: BPF prog-id=150 op=LOAD Jan 20 02:47:19.658000 audit[3286]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3110 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:19.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353266623766323432346230376431373163313362376465376338 Jan 20 02:47:21.123023 kubelet[2963]: I0120 02:47:21.113429 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4qw7r" podStartSLOduration=4.05003116 podStartE2EDuration="15.113384657s" podCreationTimestamp="2026-01-20 02:47:06 +0000 UTC" firstStartedPulling="2026-01-20 02:47:08.063732196 +0000 UTC 
m=+7.318798644" lastFinishedPulling="2026-01-20 02:47:19.127085694 +0000 UTC m=+18.382152141" observedRunningTime="2026-01-20 02:47:21.083903135 +0000 UTC m=+20.338969613" watchObservedRunningTime="2026-01-20 02:47:21.113384657 +0000 UTC m=+20.368451104" Jan 20 02:47:38.582527 sudo[1838]: pam_unix(sudo:session): session closed for user root Jan 20 02:47:38.581000 audit[1838]: USER_END pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:47:38.595788 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 02:47:38.595906 kernel: audit: type=1106 audit(1768877258.581:517): pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:47:38.604056 sshd[1837]: Connection closed by 10.0.0.1 port 56020 Jan 20 02:47:38.610983 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Jan 20 02:47:38.625039 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit. Jan 20 02:47:38.631224 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:56020.service: Deactivated successfully. Jan 20 02:47:38.584000 audit[1838]: CRED_DISP pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:47:38.642324 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 02:47:38.645259 systemd[1]: session-7.scope: Consumed 13.252s CPU time, 224.9M memory peak. 
Jan 20 02:47:38.668448 kernel: audit: type=1104 audit(1768877258.584:518): pid=1838 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:47:38.668635 kernel: audit: type=1106 audit(1768877258.613:519): pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:47:38.613000 audit[1834]: USER_END pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:47:38.675891 systemd-logind[1612]: Removed session 7. Jan 20 02:47:38.697960 kernel: audit: type=1104 audit(1768877258.613:520): pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:47:38.613000 audit[1834]: CRED_DISP pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:47:38.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.129:22-10.0.0.1:56020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:47:38.758894 kernel: audit: type=1131 audit(1768877258.632:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.129:22-10.0.0.1:56020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:47:46.033000 audit[3375]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:46.083372 kernel: audit: type=1325 audit(1768877266.033:522): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:46.033000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeaa4a15e0 a2=0 a3=7ffeaa4a15cc items=0 ppid=3069 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:46.154224 kernel: audit: type=1300 audit(1768877266.033:522): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeaa4a15e0 a2=0 a3=7ffeaa4a15cc items=0 ppid=3069 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:46.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:46.091000 audit[3375]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:46.199743 kernel: audit: type=1327 audit(1768877266.033:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:46.199886 kernel: audit: type=1325 audit(1768877266.091:523): table=nat:106 
family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:46.199952 kernel: audit: type=1300 audit(1768877266.091:523): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeaa4a15e0 a2=0 a3=0 items=0 ppid=3069 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:46.091000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeaa4a15e0 a2=0 a3=0 items=0 ppid=3069 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:46.251879 kernel: audit: type=1327 audit(1768877266.091:523): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:46.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:47.293000 audit[3377]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:47.333565 kernel: audit: type=1325 audit(1768877267.293:524): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:47.293000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff725806e0 a2=0 a3=7fff725806cc items=0 ppid=3069 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:47.456117 kernel: audit: type=1300 audit(1768877267.293:524): arch=c000003e syscall=46 success=yes 
exit=5992 a0=3 a1=7fff725806e0 a2=0 a3=7fff725806cc items=0 ppid=3069 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:47.456322 kernel: audit: type=1327 audit(1768877267.293:524): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:47.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:47.340000 audit[3377]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:47:47.340000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff725806e0 a2=0 a3=0 items=0 ppid=3069 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:47:47.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:47:47.484723 kernel: audit: type=1325 audit(1768877267.340:525): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:02.851000 audit[3381]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:02.863896 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:48:02.864030 kernel: audit: type=1325 audit(1768877282.851:526): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:02.851000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd95165c30 
a2=0 a3=7ffd95165c1c items=0 ppid=3069 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:02.997989 kernel: audit: type=1300 audit(1768877282.851:526): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd95165c30 a2=0 a3=7ffd95165c1c items=0 ppid=3069 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:02.998160 kernel: audit: type=1327 audit(1768877282.851:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:02.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:02.929000 audit[3381]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:02.929000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd95165c30 a2=0 a3=0 items=0 ppid=3069 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:03.089851 kernel: audit: type=1325 audit(1768877282.929:527): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:03.090003 kernel: audit: type=1300 audit(1768877282.929:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd95165c30 a2=0 a3=0 items=0 ppid=3069 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:03.090064 kernel: audit: type=1327 audit(1768877282.929:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:02.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:04.296000 audit[3383]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:04.340090 kernel: audit: type=1325 audit(1768877284.296:528): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:04.296000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc8864150 a2=0 a3=7ffdc886413c items=0 ppid=3069 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:04.395730 kernel: audit: type=1300 audit(1768877284.296:528): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc8864150 a2=0 a3=7ffdc886413c items=0 ppid=3069 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:04.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:04.443612 kernel: audit: type=1327 audit(1768877284.296:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:04.478000 audit[3383]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:04.517461 kernel: audit: 
type=1325 audit(1768877284.478:529): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:04.478000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc8864150 a2=0 a3=0 items=0 ppid=3069 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:04.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:10.714870 kubelet[2963]: E0120 02:48:10.714751 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:12.575021 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:48:12.575244 kernel: audit: type=1325 audit(1768877292.541:530): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:12.541000 audit[3387]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:12.657780 kernel: audit: type=1300 audit(1768877292.541:530): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd648371e0 a2=0 a3=7ffd648371cc items=0 ppid=3069 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:12.541000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd648371e0 a2=0 a3=7ffd648371cc items=0 ppid=3069 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:12.692033 kernel: audit: type=1327 audit(1768877292.541:530): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:12.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:12.723000 audit[3387]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:12.723000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd648371e0 a2=0 a3=0 items=0 ppid=3069 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:12.805547 kernel: audit: type=1325 audit(1768877292.723:531): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:12.822359 kernel: audit: type=1300 audit(1768877292.723:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd648371e0 a2=0 a3=0 items=0 ppid=3069 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:12.823728 kernel: audit: type=1327 audit(1768877292.723:531): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:12.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:13.075382 systemd[1]: Created slice kubepods-besteffort-pode2e1047e_5b1c_492c_8688_f19f69b40fdd.slice - libcontainer container 
kubepods-besteffort-pode2e1047e_5b1c_492c_8688_f19f69b40fdd.slice. Jan 20 02:48:13.105718 kubelet[2963]: I0120 02:48:13.101977 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e1047e-5b1c-492c-8688-f19f69b40fdd-tigera-ca-bundle\") pod \"calico-typha-574bcf7955-mxxqg\" (UID: \"e2e1047e-5b1c-492c-8688-f19f69b40fdd\") " pod="calico-system/calico-typha-574bcf7955-mxxqg" Jan 20 02:48:13.105718 kubelet[2963]: I0120 02:48:13.102265 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2e1047e-5b1c-492c-8688-f19f69b40fdd-typha-certs\") pod \"calico-typha-574bcf7955-mxxqg\" (UID: \"e2e1047e-5b1c-492c-8688-f19f69b40fdd\") " pod="calico-system/calico-typha-574bcf7955-mxxqg" Jan 20 02:48:13.137642 kubelet[2963]: I0120 02:48:13.132543 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpnwh\" (UniqueName: \"kubernetes.io/projected/e2e1047e-5b1c-492c-8688-f19f69b40fdd-kube-api-access-cpnwh\") pod \"calico-typha-574bcf7955-mxxqg\" (UID: \"e2e1047e-5b1c-492c-8688-f19f69b40fdd\") " pod="calico-system/calico-typha-574bcf7955-mxxqg" Jan 20 02:48:13.808256 kubelet[2963]: E0120 02:48:13.808215 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:13.834297 containerd[1640]: time="2026-01-20T02:48:13.834059634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574bcf7955-mxxqg,Uid:e2e1047e-5b1c-492c-8688-f19f69b40fdd,Namespace:calico-system,Attempt:0,}" Jan 20 02:48:13.866574 systemd[1]: Created slice kubepods-besteffort-poda3cb52d9_fd35_4ad9_b7f4_65800e46454f.slice - libcontainer container kubepods-besteffort-poda3cb52d9_fd35_4ad9_b7f4_65800e46454f.slice. 
Jan 20 02:48:13.891702 kubelet[2963]: I0120 02:48:13.876115 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfqh\" (UniqueName: \"kubernetes.io/projected/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-kube-api-access-9gfqh\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891702 kubelet[2963]: I0120 02:48:13.876165 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-cni-log-dir\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891702 kubelet[2963]: I0120 02:48:13.876194 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-flexvol-driver-host\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891702 kubelet[2963]: I0120 02:48:13.876218 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-lib-modules\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891702 kubelet[2963]: I0120 02:48:13.876242 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-var-run-calico\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891992 kubelet[2963]: I0120 
02:48:13.876266 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-cni-bin-dir\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891992 kubelet[2963]: I0120 02:48:13.876285 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-policysync\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891992 kubelet[2963]: I0120 02:48:13.876305 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-tigera-ca-bundle\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891992 kubelet[2963]: I0120 02:48:13.876330 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-var-lib-calico\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.891992 kubelet[2963]: I0120 02:48:13.876353 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-xtables-lock\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.892237 kubelet[2963]: I0120 02:48:13.876380 2963 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-cni-net-dir\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.892237 kubelet[2963]: I0120 02:48:13.876407 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3cb52d9-fd35-4ad9-b7f4-65800e46454f-node-certs\") pod \"calico-node-qft95\" (UID: \"a3cb52d9-fd35-4ad9-b7f4-65800e46454f\") " pod="calico-system/calico-node-qft95" Jan 20 02:48:13.908000 audit[3391]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:13.944545 kernel: audit: type=1325 audit(1768877293.908:532): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:13.944707 kernel: audit: type=1300 audit(1768877293.908:532): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff78741720 a2=0 a3=7fff7874170c items=0 ppid=3069 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:13.908000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff78741720 a2=0 a3=7fff7874170c items=0 ppid=3069 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:13.994923 kubelet[2963]: E0120 02:48:13.994825 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:13.995338 
kubelet[2963]: W0120 02:48:13.995254 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:13.996579 kubelet[2963]: E0120 02:48:13.995295 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:13.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:14.038944 kernel: audit: type=1327 audit(1768877293.908:532): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:14.024000 audit[3391]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:14.045688 kubelet[2963]: E0120 02:48:14.042052 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.045688 kubelet[2963]: W0120 02:48:14.042103 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.045688 kubelet[2963]: E0120 02:48:14.042134 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.076634 kernel: audit: type=1325 audit(1768877294.024:533): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:14.024000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff78741720 a2=0 a3=0 items=0 ppid=3069 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:14.082050 kubelet[2963]: E0120 02:48:14.081925 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.082050 kubelet[2963]: W0120 02:48:14.081953 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.082050 kubelet[2963]: E0120 02:48:14.081984 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.092841 kubelet[2963]: E0120 02:48:14.082336 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:14.146667 kubelet[2963]: E0120 02:48:14.140258 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.146667 kubelet[2963]: W0120 02:48:14.140290 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.146667 kubelet[2963]: E0120 02:48:14.140320 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.152689 kubelet[2963]: E0120 02:48:14.148850 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.152689 kubelet[2963]: W0120 02:48:14.152391 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.154395 kubelet[2963]: E0120 02:48:14.153752 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.171555 kubelet[2963]: E0120 02:48:14.171416 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.176796 kubelet[2963]: W0120 02:48:14.176636 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.176796 kubelet[2963]: E0120 02:48:14.176690 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.179964 kubelet[2963]: E0120 02:48:14.179818 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.179964 kubelet[2963]: W0120 02:48:14.179845 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.179964 kubelet[2963]: E0120 02:48:14.179872 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.182245 kubelet[2963]: E0120 02:48:14.182070 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.182245 kubelet[2963]: W0120 02:48:14.182094 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.182245 kubelet[2963]: E0120 02:48:14.182118 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.190675 kubelet[2963]: E0120 02:48:14.186358 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.190675 kubelet[2963]: W0120 02:48:14.190315 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.202320 kubelet[2963]: E0120 02:48:14.201561 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.209947 kubelet[2963]: E0120 02:48:14.209709 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.209947 kubelet[2963]: W0120 02:48:14.209751 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.209947 kubelet[2963]: E0120 02:48:14.209782 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.221799 kubelet[2963]: E0120 02:48:14.221702 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.225204 kubelet[2963]: W0120 02:48:14.225164 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.225412 kubelet[2963]: E0120 02:48:14.225388 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.240727 kubelet[2963]: E0120 02:48:14.240677 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.241166 kubelet[2963]: W0120 02:48:14.240905 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.241166 kubelet[2963]: E0120 02:48:14.240950 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.241892 kubelet[2963]: E0120 02:48:14.241781 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.241892 kubelet[2963]: W0120 02:48:14.241802 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.241892 kubelet[2963]: E0120 02:48:14.241823 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.249108 containerd[1640]: time="2026-01-20T02:48:14.244762123Z" level=info msg="connecting to shim 9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37" address="unix:///run/containerd/s/c2780ba81041a898e81d06cc5773130620d56659570f934a33003fe825545e94" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:48:14.264056 kubelet[2963]: E0120 02:48:14.254623 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.264056 kubelet[2963]: W0120 02:48:14.254658 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.264056 kubelet[2963]: E0120 02:48:14.254687 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.264056 kubelet[2963]: E0120 02:48:14.260724 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.264056 kubelet[2963]: W0120 02:48:14.260749 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.264056 kubelet[2963]: E0120 02:48:14.260828 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.299649 kubelet[2963]: E0120 02:48:14.293026 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:14.299649 kubelet[2963]: E0120 02:48:14.294983 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.299649 kubelet[2963]: W0120 02:48:14.295073 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.299649 kubelet[2963]: E0120 02:48:14.295101 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.299649 kubelet[2963]: E0120 02:48:14.299148 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.299649 kubelet[2963]: W0120 02:48:14.299168 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.299649 kubelet[2963]: E0120 02:48:14.299192 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.299985 containerd[1640]: time="2026-01-20T02:48:14.293808967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qft95,Uid:a3cb52d9-fd35-4ad9-b7f4-65800e46454f,Namespace:calico-system,Attempt:0,}" Jan 20 02:48:14.305295 kubelet[2963]: E0120 02:48:14.305258 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.305806 kubelet[2963]: W0120 02:48:14.305540 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.305806 kubelet[2963]: E0120 02:48:14.305578 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.311231 kubelet[2963]: E0120 02:48:14.311198 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.311650 kubelet[2963]: W0120 02:48:14.311340 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.311650 kubelet[2963]: E0120 02:48:14.311392 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.313016 kubelet[2963]: E0120 02:48:14.312854 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.313016 kubelet[2963]: W0120 02:48:14.312878 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.313016 kubelet[2963]: E0120 02:48:14.312901 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.320633 kubelet[2963]: E0120 02:48:14.320594 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.320993 kubelet[2963]: W0120 02:48:14.320776 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.320993 kubelet[2963]: E0120 02:48:14.320811 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.324716 kubelet[2963]: E0120 02:48:14.324687 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.324862 kubelet[2963]: W0120 02:48:14.324839 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.324943 kubelet[2963]: E0120 02:48:14.324927 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.385614 kubelet[2963]: E0120 02:48:14.385031 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.385614 kubelet[2963]: W0120 02:48:14.385067 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.385614 kubelet[2963]: E0120 02:48:14.385095 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.393189 kubelet[2963]: E0120 02:48:14.393154 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.394601 kubelet[2963]: W0120 02:48:14.394238 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.394601 kubelet[2963]: E0120 02:48:14.394282 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.394601 kubelet[2963]: I0120 02:48:14.394323 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2beb3373-3a79-403b-953d-80d6dc35b793-varrun\") pod \"csi-node-driver-zb7gt\" (UID: \"2beb3373-3a79-403b-953d-80d6dc35b793\") " pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:48:14.402775 kubelet[2963]: E0120 02:48:14.402730 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.402948 kubelet[2963]: W0120 02:48:14.402920 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.403053 kubelet[2963]: E0120 02:48:14.403032 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.403215 kubelet[2963]: I0120 02:48:14.403195 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2beb3373-3a79-403b-953d-80d6dc35b793-kubelet-dir\") pod \"csi-node-driver-zb7gt\" (UID: \"2beb3373-3a79-403b-953d-80d6dc35b793\") " pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:48:14.406423 kubelet[2963]: E0120 02:48:14.406395 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.406751 kubelet[2963]: W0120 02:48:14.406643 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.407013 kubelet[2963]: E0120 02:48:14.406901 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.410044 kubelet[2963]: E0120 02:48:14.409122 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.410044 kubelet[2963]: W0120 02:48:14.409145 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.410044 kubelet[2963]: E0120 02:48:14.409166 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.410044 kubelet[2963]: I0120 02:48:14.410016 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2beb3373-3a79-403b-953d-80d6dc35b793-registration-dir\") pod \"csi-node-driver-zb7gt\" (UID: \"2beb3373-3a79-403b-953d-80d6dc35b793\") " pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:48:14.452797 kubelet[2963]: E0120 02:48:14.452753 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.453169 kubelet[2963]: W0120 02:48:14.452951 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.453169 kubelet[2963]: E0120 02:48:14.452986 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.465025 kubelet[2963]: E0120 02:48:14.464978 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.465255 kubelet[2963]: W0120 02:48:14.465224 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.465378 kubelet[2963]: E0120 02:48:14.465352 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.474796 kubelet[2963]: E0120 02:48:14.471778 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.475378 kubelet[2963]: W0120 02:48:14.475344 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.481376 kubelet[2963]: E0120 02:48:14.475584 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.481376 kubelet[2963]: E0120 02:48:14.482249 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.481376 kubelet[2963]: W0120 02:48:14.482270 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.481376 kubelet[2963]: E0120 02:48:14.482293 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.491395 kubelet[2963]: E0120 02:48:14.491364 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.491988 kubelet[2963]: W0120 02:48:14.491964 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.497760 kubelet[2963]: E0120 02:48:14.495362 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.506871 kubelet[2963]: I0120 02:48:14.506829 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmx25\" (UniqueName: \"kubernetes.io/projected/2beb3373-3a79-403b-953d-80d6dc35b793-kube-api-access-xmx25\") pod \"csi-node-driver-zb7gt\" (UID: \"2beb3373-3a79-403b-953d-80d6dc35b793\") " pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:48:14.515575 kubelet[2963]: E0120 02:48:14.513689 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.515575 kubelet[2963]: W0120 02:48:14.513819 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.515575 kubelet[2963]: E0120 02:48:14.513848 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.515575 kubelet[2963]: E0120 02:48:14.515235 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.515575 kubelet[2963]: W0120 02:48:14.515251 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.515575 kubelet[2963]: E0120 02:48:14.515270 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.525671 kubelet[2963]: E0120 02:48:14.525634 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.525849 kubelet[2963]: W0120 02:48:14.525824 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.525940 kubelet[2963]: E0120 02:48:14.525924 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.526066 kubelet[2963]: I0120 02:48:14.526049 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2beb3373-3a79-403b-953d-80d6dc35b793-socket-dir\") pod \"csi-node-driver-zb7gt\" (UID: \"2beb3373-3a79-403b-953d-80d6dc35b793\") " pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:48:14.526618 systemd[1]: Started cri-containerd-9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37.scope - libcontainer container 9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37. Jan 20 02:48:14.533754 kubelet[2963]: E0120 02:48:14.533721 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.533882 kubelet[2963]: W0120 02:48:14.533864 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.533991 kubelet[2963]: E0120 02:48:14.533972 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.537808 kubelet[2963]: E0120 02:48:14.537784 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.538039 kubelet[2963]: W0120 02:48:14.538015 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.538217 kubelet[2963]: E0120 02:48:14.538193 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.543159 kubelet[2963]: E0120 02:48:14.543134 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.547513 containerd[1640]: time="2026-01-20T02:48:14.544907271Z" level=info msg="connecting to shim fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73" address="unix:///run/containerd/s/d8ec0a28bcc384d55232d12ce135ed8167c4dec58bcf840747662ab19c1f5f8f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:48:14.547666 kubelet[2963]: W0120 02:48:14.547614 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.547771 kubelet[2963]: E0120 02:48:14.547750 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.642994 kubelet[2963]: E0120 02:48:14.642853 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.643184 kubelet[2963]: W0120 02:48:14.643159 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.643299 kubelet[2963]: E0120 02:48:14.643278 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.665868 kubelet[2963]: E0120 02:48:14.665828 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.666054 kubelet[2963]: W0120 02:48:14.666027 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.666199 kubelet[2963]: E0120 02:48:14.666175 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.667789 kubelet[2963]: E0120 02:48:14.667767 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.667896 kubelet[2963]: W0120 02:48:14.667877 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.667981 kubelet[2963]: E0120 02:48:14.667965 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.668554 kubelet[2963]: E0120 02:48:14.668534 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.668647 kubelet[2963]: W0120 02:48:14.668628 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.668754 kubelet[2963]: E0120 02:48:14.668738 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.669265 kubelet[2963]: E0120 02:48:14.669249 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.669367 kubelet[2963]: W0120 02:48:14.669350 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.671182 kubelet[2963]: E0120 02:48:14.671154 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.673057 kubelet[2963]: E0120 02:48:14.673041 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.673158 kubelet[2963]: W0120 02:48:14.673142 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.673230 kubelet[2963]: E0120 02:48:14.673216 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.675405 kubelet[2963]: E0120 02:48:14.675384 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.676041 kubelet[2963]: W0120 02:48:14.676018 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.676144 kubelet[2963]: E0120 02:48:14.676125 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.677918 kubelet[2963]: E0120 02:48:14.677899 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.678016 kubelet[2963]: W0120 02:48:14.677997 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.678100 kubelet[2963]: E0120 02:48:14.678080 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.679648 kubelet[2963]: E0120 02:48:14.679630 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.679738 kubelet[2963]: W0120 02:48:14.679719 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.679854 kubelet[2963]: E0120 02:48:14.679833 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.680310 kubelet[2963]: E0120 02:48:14.680292 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.680392 kubelet[2963]: W0120 02:48:14.680375 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.685186 kubelet[2963]: E0120 02:48:14.685155 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.710048 kubelet[2963]: E0120 02:48:14.710004 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.710263 kubelet[2963]: W0120 02:48:14.710232 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.710382 kubelet[2963]: E0120 02:48:14.710364 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.711200 kubelet[2963]: E0120 02:48:14.711180 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.711287 kubelet[2963]: W0120 02:48:14.711271 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.711367 kubelet[2963]: E0120 02:48:14.711352 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.721989 kubelet[2963]: E0120 02:48:14.721953 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.722157 kubelet[2963]: W0120 02:48:14.722134 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.722290 kubelet[2963]: E0120 02:48:14.722264 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.725003 kubelet[2963]: E0120 02:48:14.724982 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.725097 kubelet[2963]: W0120 02:48:14.725079 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.725177 kubelet[2963]: E0120 02:48:14.725160 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.725600 kubelet[2963]: E0120 02:48:14.725581 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.728175 kubelet[2963]: W0120 02:48:14.727931 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.728331 kubelet[2963]: E0120 02:48:14.728308 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.722000 audit: BPF prog-id=151 op=LOAD Jan 20 02:48:14.738000 audit: BPF prog-id=152 op=LOAD Jan 20 02:48:14.738000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.739000 audit: BPF prog-id=152 op=UNLOAD Jan 20 02:48:14.739000 audit[3445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.743000 audit: BPF prog-id=153 op=LOAD Jan 20 02:48:14.743000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.743000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.743000 audit: BPF prog-id=154 op=LOAD Jan 20 02:48:14.743000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.743000 audit: BPF prog-id=154 op=UNLOAD Jan 20 02:48:14.743000 audit[3445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.743000 audit: BPF prog-id=153 op=UNLOAD Jan 20 02:48:14.743000 audit[3445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:48:14.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.743000 audit: BPF prog-id=155 op=LOAD Jan 20 02:48:14.743000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3417 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962313163306262336639303539383839363832396436626363316361 Jan 20 02:48:14.751671 kubelet[2963]: E0120 02:48:14.740870 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.751671 kubelet[2963]: W0120 02:48:14.743090 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.751671 kubelet[2963]: E0120 02:48:14.743123 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.757521 kubelet[2963]: E0120 02:48:14.756874 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.757521 kubelet[2963]: W0120 02:48:14.756900 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.757521 kubelet[2963]: E0120 02:48:14.756927 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.759643 kubelet[2963]: E0120 02:48:14.759567 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.759643 kubelet[2963]: W0120 02:48:14.759590 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.759643 kubelet[2963]: E0120 02:48:14.759616 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.764525 kubelet[2963]: E0120 02:48:14.764327 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.764525 kubelet[2963]: W0120 02:48:14.764354 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.764525 kubelet[2963]: E0120 02:48:14.764377 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.770803 kubelet[2963]: E0120 02:48:14.770723 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.770803 kubelet[2963]: W0120 02:48:14.770793 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.770973 kubelet[2963]: E0120 02:48:14.770824 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.777247 kubelet[2963]: E0120 02:48:14.776652 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.777247 kubelet[2963]: W0120 02:48:14.776678 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.777247 kubelet[2963]: E0120 02:48:14.776706 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.784740 kubelet[2963]: E0120 02:48:14.784665 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.784740 kubelet[2963]: W0120 02:48:14.784720 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.784924 kubelet[2963]: E0120 02:48:14.784755 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.787938 kubelet[2963]: E0120 02:48:14.786745 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.787938 kubelet[2963]: W0120 02:48:14.786761 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.787938 kubelet[2963]: E0120 02:48:14.786780 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.790754 kubelet[2963]: E0120 02:48:14.790683 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.790754 kubelet[2963]: W0120 02:48:14.790705 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.790754 kubelet[2963]: E0120 02:48:14.790728 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.798600 kubelet[2963]: E0120 02:48:14.798557 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.802089 kubelet[2963]: W0120 02:48:14.798774 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.802089 kubelet[2963]: E0120 02:48:14.798813 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:14.839874 systemd[1]: Started cri-containerd-fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73.scope - libcontainer container fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73. Jan 20 02:48:14.909649 kubelet[2963]: E0120 02:48:14.907733 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:14.909649 kubelet[2963]: W0120 02:48:14.907784 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:14.909649 kubelet[2963]: E0120 02:48:14.907815 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:14.960000 audit: BPF prog-id=156 op=LOAD Jan 20 02:48:14.964000 audit: BPF prog-id=157 op=LOAD Jan 20 02:48:14.964000 audit[3502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.965000 audit: BPF prog-id=157 op=UNLOAD Jan 20 02:48:14.965000 audit[3502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.965000 audit: BPF prog-id=158 op=LOAD Jan 20 02:48:14.965000 audit[3502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.965000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.968000 audit: BPF prog-id=159 op=LOAD Jan 20 02:48:14.968000 audit[3502]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.968000 audit: BPF prog-id=159 op=UNLOAD Jan 20 02:48:14.968000 audit[3502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.968000 audit: BPF prog-id=158 op=UNLOAD Jan 20 02:48:14.968000 audit[3502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:48:14.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:14.968000 audit: BPF prog-id=160 op=LOAD Jan 20 02:48:14.968000 audit[3502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3481 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:14.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665323262333539616334663732633161343636333531663862363434 Jan 20 02:48:15.116961 containerd[1640]: time="2026-01-20T02:48:15.116877407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qft95,Uid:a3cb52d9-fd35-4ad9-b7f4-65800e46454f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\"" Jan 20 02:48:15.125626 containerd[1640]: time="2026-01-20T02:48:15.125576126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574bcf7955-mxxqg,Uid:e2e1047e-5b1c-492c-8688-f19f69b40fdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37\"" Jan 20 02:48:15.127568 kubelet[2963]: E0120 02:48:15.126937 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:15.127568 kubelet[2963]: E0120 02:48:15.127007 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:15.130064 containerd[1640]: time="2026-01-20T02:48:15.130028107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 02:48:15.717647 kubelet[2963]: E0120 02:48:15.714827 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:16.477299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938995624.mount: Deactivated successfully. Jan 20 02:48:17.722191 kubelet[2963]: E0120 02:48:17.720690 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:19.863588 kubelet[2963]: E0120 02:48:19.846340 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:21.740935 kubelet[2963]: E0120 02:48:21.736686 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:23.721843 kubelet[2963]: E0120 02:48:23.715016 2963 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:23.726583 containerd[1640]: time="2026-01-20T02:48:23.725440903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:23.747209 containerd[1640]: time="2026-01-20T02:48:23.747149072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 20 02:48:23.760288 containerd[1640]: time="2026-01-20T02:48:23.758153029Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:23.787571 containerd[1640]: time="2026-01-20T02:48:23.782141827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:23.787571 containerd[1640]: time="2026-01-20T02:48:23.783167564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 8.652632239s" Jan 20 02:48:23.787571 containerd[1640]: time="2026-01-20T02:48:23.783193502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 02:48:23.793065 containerd[1640]: 
time="2026-01-20T02:48:23.791151120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 02:48:23.887535 containerd[1640]: time="2026-01-20T02:48:23.886590421Z" level=info msg="CreateContainer within sandbox \"9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 02:48:23.924548 containerd[1640]: time="2026-01-20T02:48:23.924441602Z" level=info msg="Container 415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:48:24.072700 containerd[1640]: time="2026-01-20T02:48:24.072550959Z" level=info msg="CreateContainer within sandbox \"9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4\"" Jan 20 02:48:24.085305 containerd[1640]: time="2026-01-20T02:48:24.082379231Z" level=info msg="StartContainer for \"415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4\"" Jan 20 02:48:24.110563 containerd[1640]: time="2026-01-20T02:48:24.097792423Z" level=info msg="connecting to shim 415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4" address="unix:///run/containerd/s/c2780ba81041a898e81d06cc5773130620d56659570f934a33003fe825545e94" protocol=ttrpc version=3 Jan 20 02:48:24.303221 systemd[1]: Started cri-containerd-415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4.scope - libcontainer container 415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4. 
Jan 20 02:48:24.627629 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 20 02:48:24.627794 kernel: audit: type=1334 audit(1768877304.586:550): prog-id=161 op=LOAD Jan 20 02:48:24.627844 kernel: audit: type=1334 audit(1768877304.598:551): prog-id=162 op=LOAD Jan 20 02:48:24.586000 audit: BPF prog-id=161 op=LOAD Jan 20 02:48:24.598000 audit: BPF prog-id=162 op=LOAD Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.684991 kernel: audit: type=1300 audit(1768877304.598:551): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.751851 kernel: audit: type=1327 audit(1768877304.598:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.752018 kernel: audit: type=1334 audit(1768877304.598:552): prog-id=162 op=UNLOAD Jan 20 02:48:24.598000 audit: BPF prog-id=162 op=UNLOAD Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3575 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.789639 kernel: audit: type=1300 audit(1768877304.598:552): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.789784 kernel: audit: type=1327 audit(1768877304.598:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit: BPF prog-id=163 op=LOAD Jan 20 02:48:24.879777 kernel: audit: type=1334 audit(1768877304.598:553): prog-id=163 op=LOAD Jan 20 02:48:24.879924 kernel: audit: type=1300 audit(1768877304.598:553): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit: BPF prog-id=164 op=LOAD Jan 20 02:48:24.977046 kernel: audit: type=1327 audit(1768877304.598:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit: BPF prog-id=164 op=UNLOAD Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 
Jan 20 02:48:24.598000 audit: BPF prog-id=163 op=UNLOAD Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:24.598000 audit: BPF prog-id=165 op=LOAD Jan 20 02:48:24.598000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3417 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:24.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431356265313730626230396332363465306535386632383062633934 Jan 20 02:48:25.013816 containerd[1640]: time="2026-01-20T02:48:25.013628021Z" level=info msg="StartContainer for \"415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4\" returns successfully" Jan 20 02:48:25.212873 kubelet[2963]: E0120 02:48:25.207358 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:25.313158 kubelet[2963]: E0120 02:48:25.312977 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 
02:48:25.313454 kubelet[2963]: W0120 02:48:25.313343 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.313771 kubelet[2963]: E0120 02:48:25.313681 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.318009 kubelet[2963]: E0120 02:48:25.317937 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.318150 kubelet[2963]: W0120 02:48:25.318129 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.318659 kubelet[2963]: E0120 02:48:25.318376 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.319802 kubelet[2963]: E0120 02:48:25.319783 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.320152 kubelet[2963]: W0120 02:48:25.319927 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.320152 kubelet[2963]: E0120 02:48:25.319954 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.320948 kubelet[2963]: E0120 02:48:25.320924 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.321117 kubelet[2963]: W0120 02:48:25.321026 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.321223 kubelet[2963]: E0120 02:48:25.321205 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.341137 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.356290 kubelet[2963]: W0120 02:48:25.341177 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.341209 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.348984 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.356290 kubelet[2963]: W0120 02:48:25.349009 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.349033 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.354772 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.356290 kubelet[2963]: W0120 02:48:25.354793 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.356290 kubelet[2963]: E0120 02:48:25.354815 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.390288 kubelet[2963]: E0120 02:48:25.385641 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.390288 kubelet[2963]: W0120 02:48:25.385670 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.390288 kubelet[2963]: E0120 02:48:25.385699 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.394316 kubelet[2963]: E0120 02:48:25.393774 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.394316 kubelet[2963]: W0120 02:48:25.393827 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.394316 kubelet[2963]: E0120 02:48:25.393857 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.415938 kubelet[2963]: E0120 02:48:25.415814 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.415938 kubelet[2963]: W0120 02:48:25.415878 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.415938 kubelet[2963]: E0120 02:48:25.415913 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.423567 kubelet[2963]: E0120 02:48:25.421971 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.423567 kubelet[2963]: W0120 02:48:25.421991 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.423567 kubelet[2963]: E0120 02:48:25.422013 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.432174 kubelet[2963]: E0120 02:48:25.432004 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.432174 kubelet[2963]: W0120 02:48:25.432106 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.432450 kubelet[2963]: E0120 02:48:25.432368 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.433361 kubelet[2963]: E0120 02:48:25.433264 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.433361 kubelet[2963]: W0120 02:48:25.433284 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.433570 kubelet[2963]: E0120 02:48:25.433302 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.434321 kubelet[2963]: E0120 02:48:25.434305 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.445847 kubelet[2963]: W0120 02:48:25.445655 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.445847 kubelet[2963]: E0120 02:48:25.445772 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.460109 kubelet[2963]: E0120 02:48:25.460071 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.460447 kubelet[2963]: W0120 02:48:25.460329 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.477424 kubelet[2963]: E0120 02:48:25.463238 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.479030 kubelet[2963]: E0120 02:48:25.478955 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.479206 kubelet[2963]: W0120 02:48:25.479184 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.479563 kubelet[2963]: E0120 02:48:25.479539 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.506365 kubelet[2963]: I0120 02:48:25.497796 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-574bcf7955-mxxqg" podStartSLOduration=4.836777837 podStartE2EDuration="13.497720483s" podCreationTimestamp="2026-01-20 02:48:12 +0000 UTC" firstStartedPulling="2026-01-20 02:48:15.128008785 +0000 UTC m=+74.383075233" lastFinishedPulling="2026-01-20 02:48:23.788951432 +0000 UTC m=+83.044017879" observedRunningTime="2026-01-20 02:48:25.496133156 +0000 UTC m=+84.751199614" watchObservedRunningTime="2026-01-20 02:48:25.497720483 +0000 UTC m=+84.752786931" Jan 20 02:48:25.507005 kubelet[2963]: E0120 02:48:25.506931 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.516279 kubelet[2963]: W0120 02:48:25.516227 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.516551 kubelet[2963]: E0120 02:48:25.516526 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.524615 kubelet[2963]: E0120 02:48:25.524347 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.525044 kubelet[2963]: W0120 02:48:25.524960 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.525981 kubelet[2963]: E0120 02:48:25.525956 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.528132 kubelet[2963]: E0120 02:48:25.528023 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.537608 kubelet[2963]: W0120 02:48:25.530586 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.537608 kubelet[2963]: E0120 02:48:25.530622 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.537608 kubelet[2963]: E0120 02:48:25.536864 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.537608 kubelet[2963]: W0120 02:48:25.536887 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.537608 kubelet[2963]: E0120 02:48:25.536911 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.541115 kubelet[2963]: E0120 02:48:25.538239 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.541115 kubelet[2963]: W0120 02:48:25.538283 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.541115 kubelet[2963]: E0120 02:48:25.538304 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.545885 kubelet[2963]: E0120 02:48:25.545616 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.545885 kubelet[2963]: W0120 02:48:25.545659 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.553289 kubelet[2963]: E0120 02:48:25.547845 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.572671 kubelet[2963]: E0120 02:48:25.564194 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.572671 kubelet[2963]: W0120 02:48:25.568958 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.572671 kubelet[2963]: E0120 02:48:25.569082 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.592868 kubelet[2963]: E0120 02:48:25.580856 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.592868 kubelet[2963]: W0120 02:48:25.591140 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.592868 kubelet[2963]: E0120 02:48:25.591303 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.621022 kubelet[2963]: E0120 02:48:25.620940 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.621022 kubelet[2963]: W0120 02:48:25.621007 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.621216 kubelet[2963]: E0120 02:48:25.621037 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.621434 kubelet[2963]: E0120 02:48:25.621318 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.621434 kubelet[2963]: W0120 02:48:25.621366 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.621434 kubelet[2963]: E0120 02:48:25.621424 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.621726 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.630529 kubelet[2963]: W0120 02:48:25.621754 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.621768 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.622017 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.630529 kubelet[2963]: W0120 02:48:25.622026 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.622037 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.622701 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.630529 kubelet[2963]: W0120 02:48:25.622715 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.622727 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.630529 kubelet[2963]: E0120 02:48:25.622997 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.631024 kubelet[2963]: W0120 02:48:25.623008 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.631024 kubelet[2963]: E0120 02:48:25.623020 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.636467 kubelet[2963]: E0120 02:48:25.634237 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.636467 kubelet[2963]: W0120 02:48:25.634266 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.636467 kubelet[2963]: E0120 02:48:25.634291 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.636467 kubelet[2963]: E0120 02:48:25.634709 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.636467 kubelet[2963]: W0120 02:48:25.634722 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.636467 kubelet[2963]: E0120 02:48:25.634736 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:25.642799 kubelet[2963]: E0120 02:48:25.638661 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:25.642799 kubelet[2963]: W0120 02:48:25.638707 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:25.642799 kubelet[2963]: E0120 02:48:25.638732 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:25.725534 kubelet[2963]: E0120 02:48:25.723929 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:26.234445 kubelet[2963]: E0120 02:48:26.234120 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:26.339955 kubelet[2963]: E0120 02:48:26.333293 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.339955 kubelet[2963]: W0120 02:48:26.333320 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.339955 kubelet[2963]: E0120 02:48:26.333342 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.340816 kubelet[2963]: E0120 02:48:26.340661 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.341047 kubelet[2963]: W0120 02:48:26.340966 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.341224 kubelet[2963]: E0120 02:48:26.341141 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.376942 kubelet[2963]: E0120 02:48:26.376801 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.376942 kubelet[2963]: W0120 02:48:26.376852 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.376942 kubelet[2963]: E0120 02:48:26.376878 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.380740 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.383942 kubelet[2963]: W0120 02:48:26.380786 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.380809 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.381120 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.383942 kubelet[2963]: W0120 02:48:26.381130 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.381145 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.383517 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.383942 kubelet[2963]: W0120 02:48:26.383535 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.383553 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.383942 kubelet[2963]: E0120 02:48:26.383830 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.384373 kubelet[2963]: W0120 02:48:26.383841 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.384373 kubelet[2963]: E0120 02:48:26.383853 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.391350 kubelet[2963]: E0120 02:48:26.390033 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.391350 kubelet[2963]: W0120 02:48:26.390164 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.391350 kubelet[2963]: E0120 02:48:26.390193 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.400287 kubelet[2963]: E0120 02:48:26.400025 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.407740 kubelet[2963]: W0120 02:48:26.404429 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.407740 kubelet[2963]: E0120 02:48:26.404724 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.411533 kubelet[2963]: E0120 02:48:26.409996 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.411533 kubelet[2963]: W0120 02:48:26.410093 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.411533 kubelet[2963]: E0120 02:48:26.410116 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.433113 kubelet[2963]: E0120 02:48:26.425011 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.433113 kubelet[2963]: W0120 02:48:26.429282 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.433113 kubelet[2963]: E0120 02:48:26.429433 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.445757 kubelet[2963]: E0120 02:48:26.434924 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.445757 kubelet[2963]: W0120 02:48:26.435027 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.445757 kubelet[2963]: E0120 02:48:26.435053 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.458085 kubelet[2963]: E0120 02:48:26.457279 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.458085 kubelet[2963]: W0120 02:48:26.457560 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.458085 kubelet[2963]: E0120 02:48:26.457595 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.462856 kubelet[2963]: E0120 02:48:26.462828 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.463191 kubelet[2963]: W0120 02:48:26.463165 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.463690 kubelet[2963]: E0120 02:48:26.463538 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.472463 kubelet[2963]: E0120 02:48:26.472430 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.472683 kubelet[2963]: W0120 02:48:26.472658 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.472782 kubelet[2963]: E0120 02:48:26.472762 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.481365 kubelet[2963]: E0120 02:48:26.481291 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.481365 kubelet[2963]: W0120 02:48:26.481349 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.481365 kubelet[2963]: E0120 02:48:26.481419 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.481365 kubelet[2963]: E0120 02:48:26.481951 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.481365 kubelet[2963]: W0120 02:48:26.481965 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.481365 kubelet[2963]: E0120 02:48:26.481986 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.486624 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.496910 kubelet[2963]: W0120 02:48:26.486651 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.486678 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.490027 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.496910 kubelet[2963]: W0120 02:48:26.490045 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.490062 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.491743 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.496910 kubelet[2963]: W0120 02:48:26.492829 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.496910 kubelet[2963]: E0120 02:48:26.492848 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.501164 kubelet[2963]: E0120 02:48:26.498043 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.501164 kubelet[2963]: W0120 02:48:26.498132 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.501164 kubelet[2963]: E0120 02:48:26.498151 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.501546 kubelet[2963]: E0120 02:48:26.501171 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.501546 kubelet[2963]: W0120 02:48:26.501190 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.501546 kubelet[2963]: E0120 02:48:26.501208 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.510543 kubelet[2963]: E0120 02:48:26.510046 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.510543 kubelet[2963]: W0120 02:48:26.510068 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.510543 kubelet[2963]: E0120 02:48:26.510091 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.513833 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.520046 kubelet[2963]: W0120 02:48:26.513856 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.517579 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.518010 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.520046 kubelet[2963]: W0120 02:48:26.518024 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.518042 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.518926 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.520046 kubelet[2963]: W0120 02:48:26.518940 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.520046 kubelet[2963]: E0120 02:48:26.518957 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.529527 kubelet[2963]: E0120 02:48:26.525855 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.529527 kubelet[2963]: W0120 02:48:26.525880 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.529527 kubelet[2963]: E0120 02:48:26.525903 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.564604 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.569226 kubelet[2963]: W0120 02:48:26.564656 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.564685 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.565982 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.569226 kubelet[2963]: W0120 02:48:26.566000 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.566022 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.567808 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.569226 kubelet[2963]: W0120 02:48:26.567826 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.569226 kubelet[2963]: E0120 02:48:26.567896 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.607460 kubelet[2963]: E0120 02:48:26.604466 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.607460 kubelet[2963]: W0120 02:48:26.604598 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.607460 kubelet[2963]: E0120 02:48:26.604627 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:26.618519 kubelet[2963]: E0120 02:48:26.612922 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.618519 kubelet[2963]: W0120 02:48:26.613277 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.618519 kubelet[2963]: E0120 02:48:26.613701 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:26.636259 kubelet[2963]: E0120 02:48:26.622168 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:26.636259 kubelet[2963]: W0120 02:48:26.622569 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:26.636259 kubelet[2963]: E0120 02:48:26.622604 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.184590 containerd[1640]: time="2026-01-20T02:48:27.184424187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:27.216855 containerd[1640]: time="2026-01-20T02:48:27.209690374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 20 02:48:27.216855 containerd[1640]: time="2026-01-20T02:48:27.215266126Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:27.252563 containerd[1640]: time="2026-01-20T02:48:27.245366220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:27.257551 containerd[1640]: time="2026-01-20T02:48:27.254175097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 3.462978754s" Jan 20 02:48:27.257551 containerd[1640]: time="2026-01-20T02:48:27.254224850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 02:48:27.296859 kubelet[2963]: E0120 02:48:27.296814 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:27.309899 containerd[1640]: time="2026-01-20T02:48:27.299555570Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 02:48:27.312252 kubelet[2963]: E0120 02:48:27.310340 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.318914 kubelet[2963]: W0120 02:48:27.318537 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.318914 kubelet[2963]: E0120 02:48:27.318581 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.326134 kubelet[2963]: E0120 02:48:27.325621 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.326134 kubelet[2963]: W0120 02:48:27.325641 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.326134 kubelet[2963]: E0120 02:48:27.325664 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.334145 kubelet[2963]: E0120 02:48:27.331278 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.334145 kubelet[2963]: W0120 02:48:27.331652 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.334145 kubelet[2963]: E0120 02:48:27.331683 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.336850 kubelet[2963]: E0120 02:48:27.336757 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.340903 kubelet[2963]: W0120 02:48:27.340870 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.341048 kubelet[2963]: E0120 02:48:27.341027 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.350099 kubelet[2963]: E0120 02:48:27.350068 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.350279 kubelet[2963]: W0120 02:48:27.350253 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.350419 kubelet[2963]: E0120 02:48:27.350363 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.355887 kubelet[2963]: E0120 02:48:27.355856 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.356062 kubelet[2963]: W0120 02:48:27.356036 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.356164 kubelet[2963]: E0120 02:48:27.356148 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.362661 kubelet[2963]: E0120 02:48:27.362469 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.362661 kubelet[2963]: W0120 02:48:27.362534 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.362661 kubelet[2963]: E0120 02:48:27.362558 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.363672 kubelet[2963]: E0120 02:48:27.363606 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.363672 kubelet[2963]: W0120 02:48:27.363624 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.363672 kubelet[2963]: E0120 02:48:27.363642 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.368218 kubelet[2963]: E0120 02:48:27.368116 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.368218 kubelet[2963]: W0120 02:48:27.368137 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.368218 kubelet[2963]: E0120 02:48:27.368155 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.369187 kubelet[2963]: E0120 02:48:27.369036 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.369187 kubelet[2963]: W0120 02:48:27.369051 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.369187 kubelet[2963]: E0120 02:48:27.369066 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.370259 kubelet[2963]: E0120 02:48:27.370243 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.370366 kubelet[2963]: W0120 02:48:27.370346 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.370552 kubelet[2963]: E0120 02:48:27.370533 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.377204 kubelet[2963]: E0120 02:48:27.377066 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.377204 kubelet[2963]: W0120 02:48:27.377091 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.377204 kubelet[2963]: E0120 02:48:27.377110 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.391305 kubelet[2963]: E0120 02:48:27.391086 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.393351 kubelet[2963]: W0120 02:48:27.393052 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.393351 kubelet[2963]: E0120 02:48:27.393126 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.400877 kubelet[2963]: E0120 02:48:27.400844 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.401209 kubelet[2963]: W0120 02:48:27.400995 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.401209 kubelet[2963]: E0120 02:48:27.401028 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.411582 kubelet[2963]: E0120 02:48:27.411438 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.425237 kubelet[2963]: W0120 02:48:27.411688 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.425237 kubelet[2963]: E0120 02:48:27.411723 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.473286 containerd[1640]: time="2026-01-20T02:48:27.473038138Z" level=info msg="Container f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:48:27.475985 kubelet[2963]: E0120 02:48:27.475944 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.476301 kubelet[2963]: W0120 02:48:27.476102 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.476301 kubelet[2963]: E0120 02:48:27.476143 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.476985 kubelet[2963]: E0120 02:48:27.476921 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.476985 kubelet[2963]: W0120 02:48:27.476941 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.476985 kubelet[2963]: E0120 02:48:27.476961 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.484638 kubelet[2963]: E0120 02:48:27.484597 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.485002 kubelet[2963]: W0120 02:48:27.484759 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.485002 kubelet[2963]: E0120 02:48:27.484792 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.488884 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.493581 kubelet[2963]: W0120 02:48:27.488923 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.489033 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.489463 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.493581 kubelet[2963]: W0120 02:48:27.489530 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.489550 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.489933 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.493581 kubelet[2963]: W0120 02:48:27.489947 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.489967 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.493581 kubelet[2963]: E0120 02:48:27.490660 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.498300 kubelet[2963]: W0120 02:48:27.490673 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.498300 kubelet[2963]: E0120 02:48:27.490688 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.498300 kubelet[2963]: E0120 02:48:27.491037 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.498300 kubelet[2963]: W0120 02:48:27.491051 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.498300 kubelet[2963]: E0120 02:48:27.491063 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.498300 kubelet[2963]: E0120 02:48:27.491343 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.498300 kubelet[2963]: W0120 02:48:27.491359 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.498652 kubelet[2963]: E0120 02:48:27.498562 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501023 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.502922 kubelet[2963]: W0120 02:48:27.501063 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501081 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501368 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.502922 kubelet[2963]: W0120 02:48:27.501421 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501434 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501734 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.502922 kubelet[2963]: W0120 02:48:27.501745 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.502922 kubelet[2963]: E0120 02:48:27.501756 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.514205 kubelet[2963]: E0120 02:48:27.514111 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.514205 kubelet[2963]: W0120 02:48:27.514177 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.514205 kubelet[2963]: E0120 02:48:27.514209 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.530345 kubelet[2963]: E0120 02:48:27.528744 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.530345 kubelet[2963]: W0120 02:48:27.528779 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.530345 kubelet[2963]: E0120 02:48:27.528808 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.533564 kubelet[2963]: E0120 02:48:27.531036 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.533564 kubelet[2963]: W0120 02:48:27.531079 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.533564 kubelet[2963]: E0120 02:48:27.531102 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.544601 kubelet[2963]: E0120 02:48:27.543575 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.544601 kubelet[2963]: W0120 02:48:27.543621 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.544601 kubelet[2963]: E0120 02:48:27.543646 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.547701 kubelet[2963]: E0120 02:48:27.546813 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.558646 kubelet[2963]: W0120 02:48:27.556543 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.558646 kubelet[2963]: E0120 02:48:27.556624 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:27.571281 kubelet[2963]: E0120 02:48:27.569850 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:27.571281 kubelet[2963]: W0120 02:48:27.569887 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:27.571281 kubelet[2963]: E0120 02:48:27.569916 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:27.685019 containerd[1640]: time="2026-01-20T02:48:27.684462910Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb\"" Jan 20 02:48:27.687589 containerd[1640]: time="2026-01-20T02:48:27.687547047Z" level=info msg="StartContainer for \"f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb\"" Jan 20 02:48:27.696508 containerd[1640]: time="2026-01-20T02:48:27.695226726Z" level=info msg="connecting to shim f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb" address="unix:///run/containerd/s/d8ec0a28bcc384d55232d12ce135ed8167c4dec58bcf840747662ab19c1f5f8f" protocol=ttrpc version=3 Jan 20 02:48:27.719251 kubelet[2963]: E0120 02:48:27.719036 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:27.954268 systemd[1]: Started cri-containerd-f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb.scope - libcontainer container f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb. 
Jan 20 02:48:27.969000 audit[3735]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:27.969000 audit[3735]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7bf8b380 a2=0 a3=7fff7bf8b36c items=0 ppid=3069 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:27.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:28.009000 audit[3735]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:48:28.009000 audit[3735]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff7bf8b380 a2=0 a3=7fff7bf8b36c items=0 ppid=3069 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:48:28.407562 kubelet[2963]: E0120 02:48:28.404945 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:28.483574 kubelet[2963]: E0120 02:48:28.480759 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.483574 kubelet[2963]: W0120 02:48:28.480815 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Jan 20 02:48:28.483574 kubelet[2963]: E0120 02:48:28.480845 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.516566 kubelet[2963]: E0120 02:48:28.495857 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.516566 kubelet[2963]: W0120 02:48:28.495895 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.516566 kubelet[2963]: E0120 02:48:28.495925 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.516566 kubelet[2963]: E0120 02:48:28.514349 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.516566 kubelet[2963]: W0120 02:48:28.514418 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.516566 kubelet[2963]: E0120 02:48:28.514457 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.525571 kubelet[2963]: E0120 02:48:28.520773 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.525571 kubelet[2963]: W0120 02:48:28.520834 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.525571 kubelet[2963]: E0120 02:48:28.520859 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.525807 kubelet[2963]: E0120 02:48:28.525783 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.525872 kubelet[2963]: W0120 02:48:28.525805 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.525872 kubelet[2963]: E0120 02:48:28.525833 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.535072 kubelet[2963]: E0120 02:48:28.526888 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.535072 kubelet[2963]: W0120 02:48:28.526942 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.535072 kubelet[2963]: E0120 02:48:28.526974 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.544199 kubelet[2963]: E0120 02:48:28.535414 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.544199 kubelet[2963]: W0120 02:48:28.535450 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.544199 kubelet[2963]: E0120 02:48:28.535535 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.544199 kubelet[2963]: E0120 02:48:28.541778 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.544199 kubelet[2963]: W0120 02:48:28.541812 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.544199 kubelet[2963]: E0120 02:48:28.541840 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.554541 kubelet[2963]: E0120 02:48:28.546673 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.554541 kubelet[2963]: W0120 02:48:28.546739 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.554541 kubelet[2963]: E0120 02:48:28.546770 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.571602 kubelet[2963]: E0120 02:48:28.564825 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.571602 kubelet[2963]: W0120 02:48:28.564892 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.571602 kubelet[2963]: E0120 02:48:28.564924 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.571602 kubelet[2963]: E0120 02:48:28.565325 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.571602 kubelet[2963]: W0120 02:48:28.565336 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.571602 kubelet[2963]: E0120 02:48:28.565351 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.574834 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.580592 kubelet[2963]: W0120 02:48:28.574892 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.574920 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.576605 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.580592 kubelet[2963]: W0120 02:48:28.576620 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.576639 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.576912 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.580592 kubelet[2963]: W0120 02:48:28.576924 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.580592 kubelet[2963]: E0120 02:48:28.576939 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.582592 kubelet[2963]: E0120 02:48:28.582566 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.582978 kubelet[2963]: W0120 02:48:28.582695 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.582978 kubelet[2963]: E0120 02:48:28.582725 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.587641 kubelet[2963]: E0120 02:48:28.587615 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.587752 kubelet[2963]: W0120 02:48:28.587732 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.587841 kubelet[2963]: E0120 02:48:28.587822 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.610588 kubelet[2963]: E0120 02:48:28.610437 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.611339 kubelet[2963]: W0120 02:48:28.611121 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.611815 kubelet[2963]: E0120 02:48:28.611685 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.627803 kubelet[2963]: E0120 02:48:28.626286 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.627803 kubelet[2963]: W0120 02:48:28.626331 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.627803 kubelet[2963]: E0120 02:48:28.626361 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.638008 kubelet[2963]: E0120 02:48:28.637796 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.638008 kubelet[2963]: W0120 02:48:28.637888 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.638697 kubelet[2963]: E0120 02:48:28.637919 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.655061 kubelet[2963]: E0120 02:48:28.642110 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.655061 kubelet[2963]: W0120 02:48:28.642217 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.655061 kubelet[2963]: E0120 02:48:28.642248 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.662862 kubelet[2963]: E0120 02:48:28.658031 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.662862 kubelet[2963]: W0120 02:48:28.658089 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.662862 kubelet[2963]: E0120 02:48:28.658121 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.662862 kubelet[2963]: E0120 02:48:28.659217 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.662862 kubelet[2963]: W0120 02:48:28.659232 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.662862 kubelet[2963]: E0120 02:48:28.659250 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.668569 kubelet[2963]: E0120 02:48:28.667346 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.668569 kubelet[2963]: W0120 02:48:28.667410 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.668569 kubelet[2963]: E0120 02:48:28.667436 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.698974 kubelet[2963]: E0120 02:48:28.690758 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.698974 kubelet[2963]: W0120 02:48:28.690837 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.698974 kubelet[2963]: E0120 02:48:28.690863 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.705547 kubelet[2963]: E0120 02:48:28.701244 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.705547 kubelet[2963]: W0120 02:48:28.701266 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.705547 kubelet[2963]: E0120 02:48:28.701290 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.719833 kubelet[2963]: E0120 02:48:28.710987 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.719833 kubelet[2963]: W0120 02:48:28.711096 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.719833 kubelet[2963]: E0120 02:48:28.711130 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.720323 kubelet[2963]: E0120 02:48:28.720299 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.720556 kubelet[2963]: W0120 02:48:28.720524 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.720660 kubelet[2963]: E0120 02:48:28.720639 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.731000 audit: BPF prog-id=166 op=LOAD Jan 20 02:48:28.731000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3481 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638633930363264333934613666663738613736616334393030643866 Jan 20 02:48:28.731000 audit: BPF prog-id=167 op=LOAD Jan 20 02:48:28.731000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3481 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638633930363264333934613666663738613736616334393030643866 Jan 20 02:48:28.731000 audit: BPF prog-id=167 op=UNLOAD Jan 20 02:48:28.731000 audit[3720]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.731000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638633930363264333934613666663738613736616334393030643866 Jan 20 02:48:28.731000 audit: BPF prog-id=166 op=UNLOAD Jan 20 02:48:28.731000 audit[3720]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638633930363264333934613666663738613736616334393030643866 Jan 20 02:48:28.731000 audit: BPF prog-id=168 op=LOAD Jan 20 02:48:28.731000 audit[3720]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3481 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:28.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638633930363264333934613666663738613736616334393030643866 Jan 20 02:48:28.758268 kubelet[2963]: E0120 02:48:28.755297 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.758268 kubelet[2963]: W0120 02:48:28.755557 2963 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.758268 kubelet[2963]: E0120 02:48:28.755595 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.758268 kubelet[2963]: E0120 02:48:28.757057 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.758268 kubelet[2963]: W0120 02:48:28.757147 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.758268 kubelet[2963]: E0120 02:48:28.757171 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.761073 kubelet[2963]: E0120 02:48:28.759990 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.761073 kubelet[2963]: W0120 02:48:28.760031 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.761073 kubelet[2963]: E0120 02:48:28.760051 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.785950 kubelet[2963]: E0120 02:48:28.769259 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.785950 kubelet[2963]: W0120 02:48:28.769289 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.785950 kubelet[2963]: E0120 02:48:28.769314 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:28.785950 kubelet[2963]: E0120 02:48:28.785209 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.785950 kubelet[2963]: W0120 02:48:28.785238 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.785950 kubelet[2963]: E0120 02:48:28.785267 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:48:28.814034 kubelet[2963]: E0120 02:48:28.788332 2963 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:48:28.814034 kubelet[2963]: W0120 02:48:28.788404 2963 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:48:28.814034 kubelet[2963]: E0120 02:48:28.788427 2963 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:48:29.118560 containerd[1640]: time="2026-01-20T02:48:29.116973562Z" level=info msg="StartContainer for \"f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb\" returns successfully" Jan 20 02:48:29.234701 systemd[1]: cri-containerd-f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb.scope: Deactivated successfully. Jan 20 02:48:29.246828 containerd[1640]: time="2026-01-20T02:48:29.245968025Z" level=info msg="received container exit event container_id:\"f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb\" id:\"f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb\" pid:3737 exited_at:{seconds:1768877309 nanos:244916750}" Jan 20 02:48:29.252000 audit: BPF prog-id=168 op=UNLOAD Jan 20 02:48:29.379414 kubelet[2963]: E0120 02:48:29.379172 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:29.491049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb-rootfs.mount: Deactivated successfully. 
Jan 20 02:48:29.807145 kubelet[2963]: E0120 02:48:29.719156 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:30.418104 kubelet[2963]: E0120 02:48:30.417950 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:30.436774 containerd[1640]: time="2026-01-20T02:48:30.434651035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 02:48:31.723688 kubelet[2963]: E0120 02:48:31.718791 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:32.896648 kubelet[2963]: E0120 02:48:32.884951 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:34.046136 kubelet[2963]: E0120 02:48:34.045263 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:34.197187 kubelet[2963]: E0120 02:48:34.195770 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 02:48:35.792389 kubelet[2963]: E0120 02:48:35.789026 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:36.928893 kubelet[2963]: E0120 02:48:36.926752 2963 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.142s" Jan 20 02:48:37.540049 kubelet[2963]: E0120 02:48:37.534629 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:39.443884 kubelet[2963]: E0120 02:48:39.443215 2963 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.709s" Jan 20 02:48:39.446553 kubelet[2963]: E0120 02:48:39.445614 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:39.452271 kubelet[2963]: E0120 02:48:39.452234 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:48:40.715260 kubelet[2963]: E0120 02:48:40.715146 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:42.726109 kubelet[2963]: E0120 02:48:42.724188 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:44.722202 kubelet[2963]: E0120 02:48:44.721714 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:46.716467 kubelet[2963]: E0120 02:48:46.713719 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:48.715115 kubelet[2963]: E0120 02:48:48.714163 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:50.720614 kubelet[2963]: E0120 02:48:50.719456 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:52.720552 kubelet[2963]: E0120 02:48:52.719727 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:54.713322 kubelet[2963]: E0120 02:48:54.713266 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:55.730216 kubelet[2963]: E0120 02:48:55.727195 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:57.742988 kubelet[2963]: E0120 02:48:57.742661 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:58.245648 containerd[1640]: time="2026-01-20T02:48:58.245586308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:58.269871 containerd[1640]: time="2026-01-20T02:48:58.269796422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active 
requests=0, bytes read=70442291" Jan 20 02:48:58.278389 containerd[1640]: time="2026-01-20T02:48:58.278174731Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:58.307952 containerd[1640]: time="2026-01-20T02:48:58.307828486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:48:58.312059 containerd[1640]: time="2026-01-20T02:48:58.311829166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 27.877125313s" Jan 20 02:48:58.320948 containerd[1640]: time="2026-01-20T02:48:58.311962483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 02:48:58.378924 containerd[1640]: time="2026-01-20T02:48:58.378316985Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 02:48:58.526454 containerd[1640]: time="2026-01-20T02:48:58.524606720Z" level=info msg="Container 4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:48:58.555373 systemd[1731]: Created slice background.slice - User Background Tasks Slice. Jan 20 02:48:58.606747 systemd[1731]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Jan 20 02:48:58.762423 containerd[1640]: time="2026-01-20T02:48:58.762304175Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76\"" Jan 20 02:48:58.772768 containerd[1640]: time="2026-01-20T02:48:58.772682201Z" level=info msg="StartContainer for \"4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76\"" Jan 20 02:48:58.857858 systemd[1731]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 20 02:48:58.874251 containerd[1640]: time="2026-01-20T02:48:58.872968921Z" level=info msg="connecting to shim 4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76" address="unix:///run/containerd/s/d8ec0a28bcc384d55232d12ce135ed8167c4dec58bcf840747662ab19c1f5f8f" protocol=ttrpc version=3 Jan 20 02:48:59.146709 systemd[1]: Started cri-containerd-4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76.scope - libcontainer container 4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76. 
Jan 20 02:48:59.605000 audit: BPF prog-id=169 op=LOAD Jan 20 02:48:59.628377 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 02:48:59.628822 kernel: audit: type=1334 audit(1768877339.605:566): prog-id=169 op=LOAD Jan 20 02:48:59.605000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.768436 kernel: audit: type=1300 audit(1768877339.605:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.768939 kernel: audit: type=1327 audit(1768877339.605:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:48:59.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:48:59.769335 kubelet[2963]: E0120 02:48:59.760107 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:48:59.605000 audit: BPF prog-id=170 op=LOAD Jan 20 
02:48:59.845334 kernel: audit: type=1334 audit(1768877339.605:567): prog-id=170 op=LOAD Jan 20 02:48:59.845444 kernel: audit: type=1300 audit(1768877339.605:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.605000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:48:59.993325 kernel: audit: type=1327 audit(1768877339.605:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:48:59.605000 audit: BPF prog-id=170 op=UNLOAD Jan 20 02:49:00.040161 kernel: audit: type=1334 audit(1768877339.605:568): prog-id=170 op=UNLOAD Jan 20 02:48:59.605000 audit[3820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:49:00.141731 kernel: audit: type=1300 audit(1768877339.605:568): arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:49:00.258110 kernel: audit: type=1327 audit(1768877339.605:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:49:00.264668 kernel: audit: type=1334 audit(1768877339.605:569): prog-id=169 op=UNLOAD Jan 20 02:48:59.605000 audit: BPF prog-id=169 op=UNLOAD Jan 20 02:48:59.605000 audit[3820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:48:59.605000 audit: BPF prog-id=171 op=LOAD Jan 20 02:48:59.605000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3481 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:48:59.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436353363313266653736306630646461626466626163616532636361 Jan 20 02:49:00.383005 containerd[1640]: time="2026-01-20T02:49:00.381941748Z" level=info msg="StartContainer for \"4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76\" returns successfully" Jan 20 02:49:01.033415 kubelet[2963]: E0120 02:49:01.032813 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:01.530982 kubelet[2963]: E0120 02:49:01.530256 2963 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 02:49:01.716688 kubelet[2963]: E0120 02:49:01.714274 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:02.066872 kubelet[2963]: E0120 02:49:02.063646 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:03.787357 kubelet[2963]: E0120 02:49:03.786747 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:04.491129 kubelet[2963]: E0120 02:49:04.490859 
2963 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:49:05.734750 kubelet[2963]: E0120 02:49:05.733105 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:07.501536 systemd[1]: cri-containerd-4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76.scope: Deactivated successfully. Jan 20 02:49:07.502086 systemd[1]: cri-containerd-4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76.scope: Consumed 2.309s CPU time, 185.1M memory peak, 3.5M read from disk, 171.3M written to disk. Jan 20 02:49:07.540621 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 02:49:07.541140 kernel: audit: type=1334 audit(1768877347.531:571): prog-id=171 op=UNLOAD Jan 20 02:49:07.531000 audit: BPF prog-id=171 op=UNLOAD Jan 20 02:49:07.597690 containerd[1640]: time="2026-01-20T02:49:07.597402198Z" level=info msg="received container exit event container_id:\"4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76\" id:\"4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76\" pid:3833 exited_at:{seconds:1768877347 nanos:517791657}" Jan 20 02:49:07.752324 kubelet[2963]: E0120 02:49:07.752032 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:08.010638 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76-rootfs.mount: Deactivated successfully. Jan 20 02:49:09.537337 kubelet[2963]: E0120 02:49:09.531010 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:09.551035 containerd[1640]: time="2026-01-20T02:49:09.550975849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 02:49:09.833338 systemd[1]: Created slice kubepods-besteffort-pod2beb3373_3a79_403b_953d_80d6dc35b793.slice - libcontainer container kubepods-besteffort-pod2beb3373_3a79_403b_953d_80d6dc35b793.slice. Jan 20 02:49:09.921801 containerd[1640]: time="2026-01-20T02:49:09.908273423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:12.061235 containerd[1640]: time="2026-01-20T02:49:12.060983679Z" level=error msg="Failed to destroy network for sandbox \"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:12.093303 containerd[1640]: time="2026-01-20T02:49:12.092410027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:12.093644 kubelet[2963]: E0120 
02:49:12.092992 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:12.093644 kubelet[2963]: E0120 02:49:12.093077 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:49:12.093644 kubelet[2963]: E0120 02:49:12.093104 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:49:12.104036 kubelet[2963]: E0120 02:49:12.093209 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9020bdeab6e46bcec2d3186a4213e9ea5341cb48f93df42302f95bf72fadfdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:12.102044 systemd[1]: run-netns-cni\x2dbed3af3f\x2d42ed\x2d21a3\x2db1b1\x2ddf91005139d2.mount: Deactivated successfully. Jan 20 02:49:18.898062 kubelet[2963]: I0120 02:49:18.896557 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrb65\" (UniqueName: \"kubernetes.io/projected/2048147f-559b-4756-8896-b644ce0ae95e-kube-api-access-xrb65\") pod \"goldmane-7c778bb748-5hks8\" (UID: \"2048147f-559b-4756-8896-b644ce0ae95e\") " pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:18.929362 kubelet[2963]: I0120 02:49:18.915967 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzjc\" (UniqueName: \"kubernetes.io/projected/9eab50e8-9c7c-4942-9bf1-628e8f6481c8-kube-api-access-8dzjc\") pod \"calico-kube-controllers-554b6967f8-4mv9r\" (UID: \"9eab50e8-9c7c-4942-9bf1-628e8f6481c8\") " pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:49:18.929362 kubelet[2963]: I0120 02:49:18.916110 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2048147f-559b-4756-8896-b644ce0ae95e-config\") pod \"goldmane-7c778bb748-5hks8\" (UID: \"2048147f-559b-4756-8896-b644ce0ae95e\") " pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:18.929362 kubelet[2963]: I0120 02:49:18.916441 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eab50e8-9c7c-4942-9bf1-628e8f6481c8-tigera-ca-bundle\") pod \"calico-kube-controllers-554b6967f8-4mv9r\" (UID: \"9eab50e8-9c7c-4942-9bf1-628e8f6481c8\") " pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 
20 02:49:18.929362 kubelet[2963]: I0120 02:49:18.916575 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2048147f-559b-4756-8896-b644ce0ae95e-goldmane-key-pair\") pod \"goldmane-7c778bb748-5hks8\" (UID: \"2048147f-559b-4756-8896-b644ce0ae95e\") " pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:18.929362 kubelet[2963]: I0120 02:49:18.916697 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2048147f-559b-4756-8896-b644ce0ae95e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-5hks8\" (UID: \"2048147f-559b-4756-8896-b644ce0ae95e\") " pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:19.179624 systemd[1]: Created slice kubepods-besteffort-pod2048147f_559b_4756_8896_b644ce0ae95e.slice - libcontainer container kubepods-besteffort-pod2048147f_559b_4756_8896_b644ce0ae95e.slice. 
Jan 20 02:49:19.257425 kubelet[2963]: I0120 02:49:19.244694 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78de0405-4f44-497e-8007-519223ee3a61-calico-apiserver-certs\") pod \"calico-apiserver-99b79f8fd-h8mhs\" (UID: \"78de0405-4f44-497e-8007-519223ee3a61\") " pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:19.257425 kubelet[2963]: I0120 02:49:19.244754 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9k2\" (UniqueName: \"kubernetes.io/projected/adda9552-df9a-4757-9456-d2fe24c1f167-kube-api-access-2r9k2\") pod \"whisker-5d45576995-lqpp4\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:19.257425 kubelet[2963]: I0120 02:49:19.244787 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lrh\" (UniqueName: \"kubernetes.io/projected/78de0405-4f44-497e-8007-519223ee3a61-kube-api-access-x8lrh\") pod \"calico-apiserver-99b79f8fd-h8mhs\" (UID: \"78de0405-4f44-497e-8007-519223ee3a61\") " pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:19.257425 kubelet[2963]: I0120 02:49:19.245021 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-ca-bundle\") pod \"whisker-5d45576995-lqpp4\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:19.257425 kubelet[2963]: I0120 02:49:19.245053 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-backend-key-pair\") pod 
\"whisker-5d45576995-lqpp4\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:19.330237 systemd[1]: Created slice kubepods-besteffort-pod9eab50e8_9c7c_4942_9bf1_628e8f6481c8.slice - libcontainer container kubepods-besteffort-pod9eab50e8_9c7c_4942_9bf1_628e8f6481c8.slice. Jan 20 02:49:19.367439 kubelet[2963]: I0120 02:49:19.346226 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfdsf\" (UniqueName: \"kubernetes.io/projected/67615726-cef8-44da-a26c-7795f613fcbb-kube-api-access-xfdsf\") pod \"calico-apiserver-99b79f8fd-9fwc6\" (UID: \"67615726-cef8-44da-a26c-7795f613fcbb\") " pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:19.367439 kubelet[2963]: I0120 02:49:19.346310 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67615726-cef8-44da-a26c-7795f613fcbb-calico-apiserver-certs\") pod \"calico-apiserver-99b79f8fd-9fwc6\" (UID: \"67615726-cef8-44da-a26c-7795f613fcbb\") " pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:19.470815 kubelet[2963]: I0120 02:49:19.468842 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74212177-3278-4b1c-8a68-155074b2aa8f-config-volume\") pod \"coredns-66bc5c9577-czxwc\" (UID: \"74212177-3278-4b1c-8a68-155074b2aa8f\") " pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:19.470815 kubelet[2963]: I0120 02:49:19.468933 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4nn\" (UniqueName: \"kubernetes.io/projected/74212177-3278-4b1c-8a68-155074b2aa8f-kube-api-access-vj4nn\") pod \"coredns-66bc5c9577-czxwc\" (UID: \"74212177-3278-4b1c-8a68-155074b2aa8f\") " 
pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:19.712322 systemd[1]: Created slice kubepods-besteffort-podadda9552_df9a_4757_9456_d2fe24c1f167.slice - libcontainer container kubepods-besteffort-podadda9552_df9a_4757_9456_d2fe24c1f167.slice. Jan 20 02:49:19.867465 kubelet[2963]: I0120 02:49:19.864945 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8-config-volume\") pod \"coredns-66bc5c9577-xll5r\" (UID: \"1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8\") " pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:19.867465 kubelet[2963]: I0120 02:49:19.865036 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz82z\" (UniqueName: \"kubernetes.io/projected/1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8-kube-api-access-wz82z\") pod \"coredns-66bc5c9577-xll5r\" (UID: \"1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8\") " pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:19.878259 containerd[1640]: time="2026-01-20T02:49:19.877046517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:19.879356 systemd[1]: Created slice kubepods-besteffort-pod78de0405_4f44_497e_8007_519223ee3a61.slice - libcontainer container kubepods-besteffort-pod78de0405_4f44_497e_8007_519223ee3a61.slice. 
Jan 20 02:49:19.903445 containerd[1640]: time="2026-01-20T02:49:19.899371705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:19.909412 containerd[1640]: time="2026-01-20T02:49:19.909329097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:20.279234 containerd[1640]: time="2026-01-20T02:49:20.278794706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:49:20.498974 systemd[1]: Created slice kubepods-besteffort-pod67615726_cef8_44da_a26c_7795f613fcbb.slice - libcontainer container kubepods-besteffort-pod67615726_cef8_44da_a26c_7795f613fcbb.slice. Jan 20 02:49:20.710446 containerd[1640]: time="2026-01-20T02:49:20.708848605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:49:20.861176 systemd[1]: Created slice kubepods-burstable-pod74212177_3278_4b1c_8a68_155074b2aa8f.slice - libcontainer container kubepods-burstable-pod74212177_3278_4b1c_8a68_155074b2aa8f.slice. 
Jan 20 02:49:20.959420 kubelet[2963]: E0120 02:49:20.957103 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:20.986727 containerd[1640]: time="2026-01-20T02:49:20.976627002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}" Jan 20 02:49:21.001012 systemd[1]: Created slice kubepods-burstable-pod1d5a1bc5_63f2_41a2_84b0_e2d2a5e693f8.slice - libcontainer container kubepods-burstable-pod1d5a1bc5_63f2_41a2_84b0_e2d2a5e693f8.slice. Jan 20 02:49:21.067630 kubelet[2963]: E0120 02:49:21.067582 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:21.095585 containerd[1640]: time="2026-01-20T02:49:21.085095703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}" Jan 20 02:49:21.665254 containerd[1640]: time="2026-01-20T02:49:21.665083761Z" level=error msg="Failed to destroy network for sandbox \"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:21.739880 systemd[1]: run-netns-cni\x2d931ea6dd\x2d60ff\x2d12c4\x2de82f\x2dd0e4849fa878.mount: Deactivated successfully. 
Jan 20 02:49:22.062656 containerd[1640]: time="2026-01-20T02:49:22.030983534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.128903 kubelet[2963]: E0120 02:49:22.051555 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.128903 kubelet[2963]: E0120 02:49:22.051631 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:49:22.128903 kubelet[2963]: E0120 02:49:22.051662 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:49:22.155034 kubelet[2963]: E0120 02:49:22.051730 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49312798fc5a07d56941b508ef1a3ff94a887aad0b7270b51b85de52f76647c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:49:22.558652 containerd[1640]: time="2026-01-20T02:49:22.551648115Z" level=error msg="Failed to destroy network for sandbox \"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.622374 systemd[1]: run-netns-cni\x2d01ac4cdb\x2d02ee\x2d1df9\x2d757a\x2dde7185c0c68e.mount: Deactivated successfully. 
Jan 20 02:49:22.711989 containerd[1640]: time="2026-01-20T02:49:22.677849400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.711989 containerd[1640]: time="2026-01-20T02:49:22.702860844Z" level=error msg="Failed to destroy network for sandbox \"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.724673 kubelet[2963]: E0120 02:49:22.688185 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.724673 kubelet[2963]: E0120 02:49:22.688408 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:22.724673 kubelet[2963]: E0120 02:49:22.688445 2963 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:22.724852 kubelet[2963]: E0120 02:49:22.688738 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b11cb9b3ed411ff9e05dac4b43e347955225a39c9eef853ba094bcd47165f526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45576995-lqpp4" podUID="adda9552-df9a-4757-9456-d2fe24c1f167" Jan 20 02:49:22.790204 systemd[1]: run-netns-cni\x2dfe64042e\x2d5bb1\x2d314c\x2dcb98\x2d576036e0220b.mount: Deactivated successfully. 
Jan 20 02:49:22.919073 containerd[1640]: time="2026-01-20T02:49:22.918794096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.962671 containerd[1640]: time="2026-01-20T02:49:22.946169160Z" level=error msg="Failed to destroy network for sandbox \"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.962837 kubelet[2963]: E0120 02:49:22.952890 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:22.962837 kubelet[2963]: E0120 02:49:22.952962 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:22.962837 kubelet[2963]: E0120 02:49:22.952990 2963 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:22.963000 kubelet[2963]: E0120 02:49:22.953087 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8a779e40e7630e4d818490397e01c141e2ccca2e17f0dca405143bceaba4de8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:49:22.975786 systemd[1]: run-netns-cni\x2dcf37beed\x2d2a5e\x2d1b94\x2dbca5\x2de3625d23ded8.mount: Deactivated successfully. 
Jan 20 02:49:23.035967 containerd[1640]: time="2026-01-20T02:49:23.035325863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.081318 kubelet[2963]: E0120 02:49:23.067850 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.081318 kubelet[2963]: E0120 02:49:23.067924 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:23.081318 kubelet[2963]: E0120 02:49:23.067952 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:23.081636 kubelet[2963]: E0120 02:49:23.068027 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90f48c54e420d1fa1ab87102983e6582b611a022ff07c0b4c9a2e340e410cce9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:49:23.192225 containerd[1640]: time="2026-01-20T02:49:23.188775467Z" level=error msg="Failed to destroy network for sandbox \"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.280779 systemd[1]: run-netns-cni\x2dc36f11ad\x2d5d42\x2d7a25\x2d9561\x2d65277a61de47.mount: Deactivated successfully. 
Jan 20 02:49:23.473542 containerd[1640]: time="2026-01-20T02:49:23.471917248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.480308 kubelet[2963]: E0120 02:49:23.477952 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.480308 kubelet[2963]: E0120 02:49:23.478079 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:23.480308 kubelet[2963]: E0120 02:49:23.478109 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:23.484305 kubelet[2963]: E0120 02:49:23.478339 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c556a5f04c7d6bb0739608d1485bc55dd9a88905ccd4acaf54cf32436558a79d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:49:23.721296 containerd[1640]: time="2026-01-20T02:49:23.720936045Z" level=error msg="Failed to destroy network for sandbox \"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.741052 systemd[1]: run-netns-cni\x2d112eed5b\x2d4ff9\x2db17e\x2db458\x2d3b217ec6cef9.mount: Deactivated successfully. 
Jan 20 02:49:23.834732 containerd[1640]: time="2026-01-20T02:49:23.834569825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.838549 kubelet[2963]: E0120 02:49:23.838444 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.838749 kubelet[2963]: E0120 02:49:23.838719 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:23.838860 kubelet[2963]: E0120 02:49:23.838837 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:23.845994 kubelet[2963]: E0120 02:49:23.845928 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3338136c9cdf54acf10a83c7fd47bc7e31963a221d40470d73a07646c3489898\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czxwc" podUID="74212177-3278-4b1c-8a68-155074b2aa8f" Jan 20 02:49:23.920935 containerd[1640]: time="2026-01-20T02:49:23.909086354Z" level=error msg="Failed to destroy network for sandbox \"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:23.943832 systemd[1]: run-netns-cni\x2d5c486799\x2d593a\x2d6b3b\x2de01d\x2dc704ff340593.mount: Deactivated successfully. 
Jan 20 02:49:23.997019 containerd[1640]: time="2026-01-20T02:49:23.996941407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:24.002694 kubelet[2963]: E0120 02:49:23.997847 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:24.002694 kubelet[2963]: E0120 02:49:24.002354 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:24.003826 kubelet[2963]: E0120 02:49:24.002827 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:24.003826 kubelet[2963]: E0120 02:49:24.003467 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1145cd99ece96213b3146c340391ba62684b4be10a2cffc888c2cc4644316866\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xll5r" podUID="1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8" Jan 20 02:49:24.745585 kubelet[2963]: E0120 02:49:24.724966 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:26.775426 containerd[1640]: time="2026-01-20T02:49:26.772083159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:28.493590 containerd[1640]: time="2026-01-20T02:49:28.468026181Z" level=error msg="Failed to destroy network for sandbox \"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:28.523574 systemd[1]: run-netns-cni\x2d26e738d5\x2dfeea\x2dc594\x2de6bc\x2d038e451f7480.mount: Deactivated successfully. 
Jan 20 02:49:28.544272 containerd[1640]: time="2026-01-20T02:49:28.538948329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:28.544573 kubelet[2963]: E0120 02:49:28.543459 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:28.544573 kubelet[2963]: E0120 02:49:28.543726 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:49:28.544573 kubelet[2963]: E0120 02:49:28.543849 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" 
Jan 20 02:49:28.568292 kubelet[2963]: E0120 02:49:28.544095 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9290354d1790bbbadd0357bb6964e4579e9b37f9b33d7317911287e141ad5d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:31.727679 kubelet[2963]: E0120 02:49:31.726843 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:34.734577 containerd[1640]: time="2026-01-20T02:49:34.732451563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:49:34.801320 containerd[1640]: time="2026-01-20T02:49:34.792695923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:34.817242 kubelet[2963]: E0120 02:49:34.813337 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:34.973399 containerd[1640]: time="2026-01-20T02:49:34.963926951Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}" Jan 20 02:49:35.624867 containerd[1640]: time="2026-01-20T02:49:35.606596424Z" level=error msg="Failed to destroy network for sandbox \"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.623739 systemd[1]: run-netns-cni\x2d8d4c1b59\x2d3cd2\x2d8ad5\x2d145b\x2d1cedd030de1e.mount: Deactivated successfully. Jan 20 02:49:35.687746 containerd[1640]: time="2026-01-20T02:49:35.683843127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.687988 kubelet[2963]: E0120 02:49:35.685700 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.687988 kubelet[2963]: E0120 02:49:35.685771 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:35.687988 kubelet[2963]: E0120 02:49:35.685804 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:49:35.694117 kubelet[2963]: E0120 02:49:35.685963 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bcc1457f15357a336467c5cbe07301efd8bb9ecb62a37b6179d4ec037f18e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:49:35.845339 containerd[1640]: time="2026-01-20T02:49:35.836190670Z" level=error msg="Failed to destroy network for sandbox \"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.888324 systemd[1]: 
run-netns-cni\x2d54320b90\x2d95b0\x2d891b\x2db11c\x2d447f1fcbf88e.mount: Deactivated successfully. Jan 20 02:49:35.986831 containerd[1640]: time="2026-01-20T02:49:35.986762732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.995203 kubelet[2963]: E0120 02:49:35.993154 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:35.995203 kubelet[2963]: E0120 02:49:35.993233 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:35.995203 kubelet[2963]: E0120 02:49:35.993262 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:49:35.995949 kubelet[2963]: E0120 02:49:35.993327 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3856fdcc380f09ddc07446580f18a060682b39689ccc7eec376c9350960675ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:49:36.388370 containerd[1640]: time="2026-01-20T02:49:36.388308872Z" level=error msg="Failed to destroy network for sandbox \"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:36.411724 systemd[1]: run-netns-cni\x2d7b546a03\x2d02f8\x2dc27e\x2d0846\x2db7c0140869ac.mount: Deactivated successfully. 
Jan 20 02:49:36.493098 containerd[1640]: time="2026-01-20T02:49:36.492112051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:36.493325 kubelet[2963]: E0120 02:49:36.492400 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:36.493325 kubelet[2963]: E0120 02:49:36.492460 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:36.493325 kubelet[2963]: E0120 02:49:36.492570 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:49:36.493560 kubelet[2963]: E0120 02:49:36.492633 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18e98d45168687e326751be6bd87ed66a9f6d9db9ae65b147a99334fead9acd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czxwc" podUID="74212177-3278-4b1c-8a68-155074b2aa8f" Jan 20 02:49:36.778612 containerd[1640]: time="2026-01-20T02:49:36.777126443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:36.825393 containerd[1640]: time="2026-01-20T02:49:36.822042597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:37.718727 kubelet[2963]: E0120 02:49:37.718607 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:37.829421 kubelet[2963]: E0120 02:49:37.821626 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:37.845981 containerd[1640]: time="2026-01-20T02:49:37.839766740Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}" Jan 20 02:49:37.998824 containerd[1640]: time="2026-01-20T02:49:37.933315676Z" level=error msg="Failed to destroy network for sandbox \"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:37.975287 systemd[1]: run-netns-cni\x2d31ceb1c3\x2df853\x2d7e88\x2df851\x2dae27049947a3.mount: Deactivated successfully. Jan 20 02:49:38.095356 containerd[1640]: time="2026-01-20T02:49:38.091136273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.095637 kubelet[2963]: E0120 02:49:38.091549 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.095637 kubelet[2963]: E0120 02:49:38.091631 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:38.095637 kubelet[2963]: E0120 02:49:38.091661 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:49:38.095814 kubelet[2963]: E0120 02:49:38.091730 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c1ee62e5e6eba14a55d589131decbd0a72e949e58f8797d7a9796f6ad8fc245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45576995-lqpp4" podUID="adda9552-df9a-4757-9456-d2fe24c1f167" Jan 20 02:49:38.148733 containerd[1640]: time="2026-01-20T02:49:38.126981446Z" level=error msg="Failed to destroy network for sandbox \"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.174876 systemd[1]: run-netns-cni\x2d7fb362c7\x2dbe64\x2dfcfc\x2db254\x2d9d22e229d3e1.mount: Deactivated 
successfully. Jan 20 02:49:38.205780 containerd[1640]: time="2026-01-20T02:49:38.204767873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.206007 kubelet[2963]: E0120 02:49:38.205048 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.206007 kubelet[2963]: E0120 02:49:38.205159 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:49:38.206007 kubelet[2963]: E0120 02:49:38.205187 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:49:38.206227 kubelet[2963]: E0120 02:49:38.205248 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2809a1129345ce201bc7ed3c93de1e9e45614861c592262e7bbd3789e4f94bd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:49:38.450561 containerd[1640]: time="2026-01-20T02:49:38.449863286Z" level=error msg="Failed to destroy network for sandbox \"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.456806 systemd[1]: run-netns-cni\x2d22d5e638\x2dc27d\x2d5bdb\x2de148\x2d425974944e73.mount: Deactivated successfully. 
Jan 20 02:49:38.470611 kubelet[2963]: E0120 02:49:38.464359 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:38.470611 kubelet[2963]: E0120 02:49:38.464428 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:38.470611 kubelet[2963]: E0120 02:49:38.464461 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:49:38.470727 containerd[1640]: time="2026-01-20T02:49:38.463884806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 02:49:38.470926 kubelet[2963]: E0120 02:49:38.464578 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78b67f49456e586f3fbb49105e3d5e98d4564d8bac2d7c5ca51be97c22f2e095\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xll5r" podUID="1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8" Jan 20 02:49:38.736927 containerd[1640]: time="2026-01-20T02:49:38.735955411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:49:39.206908 containerd[1640]: time="2026-01-20T02:49:39.206841142Z" level=error msg="Failed to destroy network for sandbox \"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:39.230430 systemd[1]: run-netns-cni\x2d191a459b\x2dfdd5\x2d93a6\x2d016a\x2d746c0bd26eb2.mount: Deactivated successfully. 
Jan 20 02:49:39.241114 containerd[1640]: time="2026-01-20T02:49:39.240819480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:39.241728 kubelet[2963]: E0120 02:49:39.241627 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:39.244464 kubelet[2963]: E0120 02:49:39.243654 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:39.244464 kubelet[2963]: E0120 02:49:39.243694 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:49:39.244464 kubelet[2963]: E0120 02:49:39.243768 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac16bb102cb89ca30de863959f1702baa6ee2ba00f3a4adfa0f93ca4c77a6073\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:49:40.717398 kubelet[2963]: E0120 02:49:40.714622 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:41.950212 containerd[1640]: time="2026-01-20T02:49:41.945384655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:49:42.896959 containerd[1640]: time="2026-01-20T02:49:42.885691288Z" level=error msg="Failed to destroy network for sandbox \"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:42.920143 systemd[1]: run-netns-cni\x2d8fddd71b\x2da80c\x2d52bc\x2de7c1\x2d16f24d7758fa.mount: Deactivated successfully. 
Jan 20 02:49:42.978450 containerd[1640]: time="2026-01-20T02:49:42.978286514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:42.987446 kubelet[2963]: E0120 02:49:42.979981 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:42.987446 kubelet[2963]: E0120 02:49:42.980108 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:49:42.987446 kubelet[2963]: E0120 02:49:42.980135 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" 
Jan 20 02:49:42.994174 kubelet[2963]: E0120 02:49:42.980201 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfecbf9f89d305b2b464a897a9b24799ab4ccc0fdc396d9d76ac016d9d29fe53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:49:43.722930 kubelet[2963]: E0120 02:49:43.722883 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:49:46.773759 containerd[1640]: time="2026-01-20T02:49:46.768942547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:49:47.765354 containerd[1640]: time="2026-01-20T02:49:47.731594115Z" level=error msg="Failed to destroy network for sandbox \"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:49:47.791172 systemd[1]: run-netns-cni\x2d4f816030\x2d29fa\x2d3d52\x2d0517\x2dd4bb1ad84575.mount: Deactivated successfully. 
Jan 20 02:49:47.876774 containerd[1640]: time="2026-01-20T02:49:47.876706365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:47.887423 kubelet[2963]: E0120 02:49:47.886075 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:47.887423 kubelet[2963]: E0120 02:49:47.886167 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs"
Jan 20 02:49:47.887423 kubelet[2963]: E0120 02:49:47.886195 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs"
Jan 20 02:49:47.888191 kubelet[2963]: E0120 02:49:47.886262 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"427d578b32e6af48c8822b51c4a1a13478d56a2db2d09e92674fc189626fafc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:49:50.799293 kubelet[2963]: E0120 02:49:50.787821 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:50.836781 containerd[1640]: time="2026-01-20T02:49:50.827448092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:49:50.844061 containerd[1640]: time="2026-01-20T02:49:50.843801160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}"
Jan 20 02:49:52.349321 containerd[1640]: time="2026-01-20T02:49:52.311255590Z" level=error msg="Failed to destroy network for sandbox \"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.382623 systemd[1]: run-netns-cni\x2da0561b89\x2d7f7d\x2dae92\x2d6d9f\x2dc626f88dc26c.mount: Deactivated successfully.
Jan 20 02:49:52.443206 containerd[1640]: time="2026-01-20T02:49:52.425525688Z" level=error msg="Failed to destroy network for sandbox \"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.488849 systemd[1]: run-netns-cni\x2dfdca3531\x2dc2df\x2de87f\x2d6259\x2d35336c169841.mount: Deactivated successfully.
Jan 20 02:49:52.615933 containerd[1640]: time="2026-01-20T02:49:52.614284303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.626331 kubelet[2963]: E0120 02:49:52.624035 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.626331 kubelet[2963]: E0120 02:49:52.624114 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8"
Jan 20 02:49:52.626331 kubelet[2963]: E0120 02:49:52.624144 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8"
Jan 20 02:49:52.644258 kubelet[2963]: E0120 02:49:52.624212 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90607f528ae846e6eff7e88607a2928fb6fbf57471accaf69d353726d238b1c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:49:52.693931 containerd[1640]: time="2026-01-20T02:49:52.693857649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.700885 kubelet[2963]: E0120 02:49:52.700828 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:52.701170 kubelet[2963]: E0120 02:49:52.701140 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc"
Jan 20 02:49:52.706887 kubelet[2963]: E0120 02:49:52.701276 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc"
Jan 20 02:49:52.706887 kubelet[2963]: E0120 02:49:52.701386 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfb06030ab6931e88065a046d91c96d421f74d6a6adf5b39b40bd8b52c05fff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czxwc" podUID="74212177-3278-4b1c-8a68-155074b2aa8f"
Jan 20 02:49:52.769440 containerd[1640]: time="2026-01-20T02:49:52.767106107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}"
Jan 20 02:49:52.862283 kubelet[2963]: E0120 02:49:52.856911 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:52.907271 containerd[1640]: time="2026-01-20T02:49:52.888886841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,}"
Jan 20 02:49:52.907271 containerd[1640]: time="2026-01-20T02:49:52.898179307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}"
Jan 20 02:49:53.873240 containerd[1640]: time="2026-01-20T02:49:53.873186567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:49:54.859141 containerd[1640]: time="2026-01-20T02:49:54.847426665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}"
Jan 20 02:49:56.647048 containerd[1640]: time="2026-01-20T02:49:56.645361059Z" level=error msg="Failed to destroy network for sandbox \"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:56.698907 systemd[1]: run-netns-cni\x2deff4b21e\x2d2752\x2d90d7\x2d5209\x2d38566e820c7c.mount: Deactivated successfully.
Jan 20 02:49:56.810323 containerd[1640]: time="2026-01-20T02:49:56.807914384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:56.815937 kubelet[2963]: E0120 02:49:56.815742 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:56.815937 kubelet[2963]: E0120 02:49:56.815839 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r"
Jan 20 02:49:56.815937 kubelet[2963]: E0120 02:49:56.815872 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r"
Jan 20 02:49:56.816723 kubelet[2963]: E0120 02:49:56.815937 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26b14226fae37e6d29f7284c54dc4f874b603b5e7ccdaff0527872b554ba01ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:49:56.927191 containerd[1640]: time="2026-01-20T02:49:56.926236300Z" level=error msg="Failed to destroy network for sandbox \"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.010639 systemd[1]: run-netns-cni\x2d10ea5cad\x2db5d7\x2d97e7\x2d0a68\x2dd9da71d53dbf.mount: Deactivated successfully.
Jan 20 02:49:57.020435 containerd[1640]: time="2026-01-20T02:49:57.015616369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.025071 kubelet[2963]: E0120 02:49:57.018679 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.025071 kubelet[2963]: E0120 02:49:57.018919 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r"
Jan 20 02:49:57.025071 kubelet[2963]: E0120 02:49:57.019053 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r"
Jan 20 02:49:57.025330 kubelet[2963]: E0120 02:49:57.019323 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"968abeb5dfd1258e1728b5a880e17fe5f76448c4e9d42aacd6536fc566d7659e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xll5r" podUID="1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8"
Jan 20 02:49:57.381379 containerd[1640]: time="2026-01-20T02:49:57.377924472Z" level=error msg="Failed to destroy network for sandbox \"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.450234 systemd[1]: run-netns-cni\x2d407c66a5\x2dd9c1\x2d0640\x2d9d4e\x2d8fb64f0987cb.mount: Deactivated successfully.
Jan 20 02:49:57.686877 containerd[1640]: time="2026-01-20T02:49:57.670132404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.689798 kubelet[2963]: E0120 02:49:57.670454 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.689798 kubelet[2963]: E0120 02:49:57.674211 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4"
Jan 20 02:49:57.689798 kubelet[2963]: E0120 02:49:57.674434 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4"
Jan 20 02:49:57.690064 kubelet[2963]: E0120 02:49:57.683799 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3d23a98dc2e0a44072bae9a75fa924016001e0b9443fa9d867eea3c812f1f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45576995-lqpp4" podUID="adda9552-df9a-4757-9456-d2fe24c1f167"
Jan 20 02:49:57.735574 containerd[1640]: time="2026-01-20T02:49:57.690744534Z" level=error msg="Failed to destroy network for sandbox \"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:57.710806 systemd[1]: run-netns-cni\x2d255aafca\x2ddecc\x2db7fd\x2da5eb\x2dbb5a2c54dc3e.mount: Deactivated successfully.
Jan 20 02:49:57.922216 containerd[1640]: time="2026-01-20T02:49:57.917564720Z" level=error msg="Failed to destroy network for sandbox \"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:58.000250 containerd[1640]: time="2026-01-20T02:49:57.969256147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:58.000538 kubelet[2963]: E0120 02:49:57.976580 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:58.000538 kubelet[2963]: E0120 02:49:57.991433 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6"
Jan 20 02:49:58.000538 kubelet[2963]: E0120 02:49:57.991571 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6"
Jan 20 02:49:57.966739 systemd[1]: run-netns-cni\x2d3bc43d36\x2d94c5\x2d6b25\x2da451\x2d692cddcbbb1d.mount: Deactivated successfully.
Jan 20 02:49:58.005393 kubelet[2963]: E0120 02:49:58.001321 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08ed8578baaecd3e70168f37953db6c6dfa94a0b9bde190c1aa04731a045a5b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:49:58.083547 containerd[1640]: time="2026-01-20T02:49:58.071028741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:58.095241 kubelet[2963]: E0120 02:49:58.095186 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:58.095588 kubelet[2963]: E0120 02:49:58.095559 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt"
Jan 20 02:49:58.095727 kubelet[2963]: E0120 02:49:58.095704 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt"
Jan 20 02:49:58.095867 kubelet[2963]: E0120 02:49:58.095839 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64bd19f57fcf4f2e11557f193492df90379d176e846a781b77c0356aa2ae8c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:49:58.780666 containerd[1640]: time="2026-01-20T02:49:58.774098875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:49:59.668217 containerd[1640]: time="2026-01-20T02:49:59.658863777Z" level=error msg="Failed to destroy network for sandbox \"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:59.691898 systemd[1]: run-netns-cni\x2d9fe348cc\x2d836a\x2d6a66\x2d750f\x2d7d9e9752a37b.mount: Deactivated successfully.
Jan 20 02:49:59.730076 kubelet[2963]: E0120 02:49:59.712699 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:59.730076 kubelet[2963]: E0120 02:49:59.712773 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs"
Jan 20 02:49:59.730076 kubelet[2963]: E0120 02:49:59.712796 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs"
Jan 20 02:49:59.730612 containerd[1640]: time="2026-01-20T02:49:59.706638293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:49:59.730762 kubelet[2963]: E0120 02:49:59.712858 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d44586aee2e77680de05ce902594047d283bea7bd7692aeb25b2bfd9affa5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:50:04.740733 kubelet[2963]: E0120 02:50:04.740685 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:04.756559 containerd[1640]: time="2026-01-20T02:50:04.756338914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:50:06.657124 containerd[1640]: time="2026-01-20T02:50:06.640306861Z" level=error msg="Failed to destroy network for sandbox \"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:06.673989 systemd[1]: run-netns-cni\x2da9413be8\x2dee4a\x2d0b36\x2d0f59\x2dada0a274b9d6.mount: Deactivated successfully.
Jan 20 02:50:07.023238 containerd[1640]: time="2026-01-20T02:50:07.017393746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:07.023238 containerd[1640]: time="2026-01-20T02:50:07.017878715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}"
Jan 20 02:50:07.027688 kubelet[2963]: E0120 02:50:07.027635 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:07.028284 kubelet[2963]: E0120 02:50:07.028250 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc"
Jan 20 02:50:07.028406 kubelet[2963]: E0120 02:50:07.028381 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc"
Jan 20 02:50:07.028668 kubelet[2963]: E0120 02:50:07.028621 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72ee2d8025603927bc6d6ce9c9df395b4cce04d71d3933dcc255c5c453b1038c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czxwc" podUID="74212177-3278-4b1c-8a68-155074b2aa8f"
Jan 20 02:50:07.820661 containerd[1640]: time="2026-01-20T02:50:07.820599116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}"
Jan 20 02:50:07.851418 kubelet[2963]: E0120 02:50:07.849366 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:50:07.869098 containerd[1640]: time="2026-01-20T02:50:07.863676378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}"
Jan 20 02:50:09.317443 containerd[1640]: time="2026-01-20T02:50:09.287190433Z" level=error msg="Failed to destroy network for sandbox \"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:09.299859 systemd[1]: run-netns-cni\x2d686ece67\x2dfd4d\x2d8418\x2d10c8\x2d886637c4d9e4.mount: Deactivated successfully.
Jan 20 02:50:09.381627 containerd[1640]: time="2026-01-20T02:50:09.369661905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:09.396138 kubelet[2963]: E0120 02:50:09.372742 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:50:09.396138 kubelet[2963]: E0120 02:50:09.373044 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8"
Jan 20 02:50:09.396138 kubelet[2963]: E0120 02:50:09.378577 2963 kuberuntime_manager.go:1343]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:50:09.396890 kubelet[2963]: E0120 02:50:09.378828 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47754110d2064a942ba4d7744f257710a2c7be483cabed6fdd8997582e055913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:09.982677 containerd[1640]: time="2026-01-20T02:50:09.979964706Z" level=error msg="Failed to destroy network for sandbox \"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:09.998644 containerd[1640]: time="2026-01-20T02:50:09.998378326Z" level=error msg="Failed to destroy network for sandbox \"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
02:50:10.009445 systemd[1]: run-netns-cni\x2d3002a9bb\x2df07d\x2d32a0\x2dca33\x2d4cfd6f421efa.mount: Deactivated successfully. Jan 20 02:50:10.019234 containerd[1640]: time="2026-01-20T02:50:10.019171974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:10.043804 systemd[1]: run-netns-cni\x2da4557fdc\x2d4f71\x2d3d09\x2dcaeb\x2d4d110d6b7092.mount: Deactivated successfully. Jan 20 02:50:10.085746 kubelet[2963]: E0120 02:50:10.085335 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:10.085746 kubelet[2963]: E0120 02:50:10.085424 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:50:10.085746 kubelet[2963]: E0120 02:50:10.085457 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:50:10.099320 kubelet[2963]: E0120 02:50:10.094752 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"102a929e471b60b511f0af9abad265daf00ce88d52fde6ad8626df3b6479abfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:50:10.110080 containerd[1640]: time="2026-01-20T02:50:10.107629909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:10.121985 kubelet[2963]: E0120 02:50:10.120629 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:10.121985 kubelet[2963]: E0120 02:50:10.120765 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:50:10.121985 kubelet[2963]: E0120 02:50:10.120831 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:50:10.122243 kubelet[2963]: E0120 02:50:10.120942 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98824ced2291bcae58418b06792e0a438933aaf35f03f14e2c5bd3c96ce2e4f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xll5r" 
podUID="1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8" Jan 20 02:50:10.783281 containerd[1640]: time="2026-01-20T02:50:10.783143222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:10.789758 containerd[1640]: time="2026-01-20T02:50:10.789723404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:11.835827 containerd[1640]: time="2026-01-20T02:50:11.835727704Z" level=error msg="Failed to destroy network for sandbox \"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:11.868807 systemd[1]: run-netns-cni\x2d4d185b57\x2dd940\x2d0bb8\x2d3f64\x2d252399c143a0.mount: Deactivated successfully. Jan 20 02:50:11.899351 containerd[1640]: time="2026-01-20T02:50:11.899162294Z" level=error msg="Failed to destroy network for sandbox \"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:11.924762 systemd[1]: run-netns-cni\x2dca648853\x2df0b7\x2d51b1\x2d95b4\x2d07ba7b5b2130.mount: Deactivated successfully. 
Jan 20 02:50:11.932890 containerd[1640]: time="2026-01-20T02:50:11.931420307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:11.933136 kubelet[2963]: E0120 02:50:11.932008 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:11.933136 kubelet[2963]: E0120 02:50:11.932097 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:50:11.933136 kubelet[2963]: E0120 02:50:11.932124 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:50:11.933710 kubelet[2963]: E0120 02:50:11.932187 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce6b86e67de8acddfec9c3dc988ce02aeab44ebac8eb926a4cf232d0e9c14c39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45576995-lqpp4" podUID="adda9552-df9a-4757-9456-d2fe24c1f167" Jan 20 02:50:12.002151 containerd[1640]: time="2026-01-20T02:50:11.986230796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:12.002397 kubelet[2963]: E0120 02:50:11.986721 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:12.002397 kubelet[2963]: E0120 02:50:11.986789 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:50:12.002397 kubelet[2963]: E0120 02:50:11.986818 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:50:12.002628 kubelet[2963]: E0120 02:50:11.986881 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa501afc60e317b1563dafd379908cf04c0e5b84dbde419b24d100db9cef46cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:50:12.758362 containerd[1640]: time="2026-01-20T02:50:12.757694681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:13.937367 containerd[1640]: time="2026-01-20T02:50:13.937151662Z" level=error 
msg="Failed to destroy network for sandbox \"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:13.945679 systemd[1]: run-netns-cni\x2d05d6cc85\x2d9a35\x2d6acd\x2d0a99\x2db137b09315e4.mount: Deactivated successfully. Jan 20 02:50:14.065315 containerd[1640]: time="2026-01-20T02:50:14.064960716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:14.072958 kubelet[2963]: E0120 02:50:14.072792 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:14.072958 kubelet[2963]: E0120 02:50:14.072881 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:50:14.072958 
kubelet[2963]: E0120 02:50:14.072952 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:50:14.073728 kubelet[2963]: E0120 02:50:14.073016 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb10471cbb7607c76b95bc6384f1a5ebbe6996f7e794402113eb7f598c905ca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:50:14.723281 containerd[1640]: time="2026-01-20T02:50:14.722749528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:15.155419 containerd[1640]: time="2026-01-20T02:50:15.140352010Z" level=error msg="Failed to destroy network for sandbox \"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:15.166362 
containerd[1640]: time="2026-01-20T02:50:15.158823377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:15.166723 kubelet[2963]: E0120 02:50:15.159332 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:15.166723 kubelet[2963]: E0120 02:50:15.159403 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:50:15.166723 kubelet[2963]: E0120 02:50:15.159430 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:50:15.167315 kubelet[2963]: E0120 02:50:15.159556 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de839c412fd3dec0d08cb0a909d055b91e7919d6a4fb2a88e197e4d3ef920305\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:50:15.169413 systemd[1]: run-netns-cni\x2d91d4449c\x2d5e3a\x2d461c\x2d4bcf\x2d7d8732557235.mount: Deactivated successfully. 
Jan 20 02:50:18.925549 kubelet[2963]: E0120 02:50:18.925141 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:18.940254 containerd[1640]: time="2026-01-20T02:50:18.935444839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}" Jan 20 02:50:19.449045 containerd[1640]: time="2026-01-20T02:50:19.438656004Z" level=error msg="Failed to destroy network for sandbox \"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:19.450450 systemd[1]: run-netns-cni\x2d7cdc3f79\x2d2fe2\x2da16f\x2d5884\x2defcad65409b0.mount: Deactivated successfully. Jan 20 02:50:19.481207 containerd[1640]: time="2026-01-20T02:50:19.481131551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:19.482205 kubelet[2963]: E0120 02:50:19.482129 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 20 02:50:19.482644 kubelet[2963]: E0120 02:50:19.482468 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:50:19.482867 kubelet[2963]: E0120 02:50:19.482586 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czxwc" Jan 20 02:50:19.483206 kubelet[2963]: E0120 02:50:19.483048 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czxwc_kube-system(74212177-3278-4b1c-8a68-155074b2aa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48ba476ee25b81f348c107b937bfedf412d9895776834b8f770d54c26d678833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czxwc" podUID="74212177-3278-4b1c-8a68-155074b2aa8f" Jan 20 02:50:21.965742 kubelet[2963]: E0120 02:50:21.964808 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:21.973753 containerd[1640]: time="2026-01-20T02:50:21.970407165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}" Jan 20 02:50:21.990574 containerd[1640]: time="2026-01-20T02:50:21.989297958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:22.384759 containerd[1640]: time="2026-01-20T02:50:22.368396282Z" level=error msg="Failed to destroy network for sandbox \"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:22.384759 containerd[1640]: time="2026-01-20T02:50:22.377829102Z" level=error msg="Failed to destroy network for sandbox \"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:22.397644 systemd[1]: run-netns-cni\x2dac0bc8b8\x2de359\x2d3b47\x2d829e\x2d39e683054ae7.mount: Deactivated successfully. 
Jan 20 02:50:22.432637 containerd[1640]: time="2026-01-20T02:50:22.432422272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:22.439462 kubelet[2963]: E0120 02:50:22.434177 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:22.439462 kubelet[2963]: E0120 02:50:22.438040 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:50:22.439462 kubelet[2963]: E0120 02:50:22.438125 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-xll5r" Jan 20 02:50:22.439055 systemd[1]: run-netns-cni\x2d4b6c0ede\x2de3df\x2dd053\x2d220e\x2df676aec1d26a.mount: Deactivated successfully. Jan 20 02:50:22.439828 kubelet[2963]: E0120 02:50:22.438268 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xll5r_kube-system(1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b20c619ce55141763211d6391b2be4703b1d064986ce800c9d374a379d62e21f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xll5r" podUID="1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8" Jan 20 02:50:22.471555 containerd[1640]: time="2026-01-20T02:50:22.471308632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:22.472446 kubelet[2963]: E0120 02:50:22.472343 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
20 02:50:22.472830 kubelet[2963]: E0120 02:50:22.472798 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:50:22.473103 kubelet[2963]: E0120 02:50:22.472956 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5hks8" Jan 20 02:50:22.473978 kubelet[2963]: E0120 02:50:22.473838 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56793c7aaf2e427619524b45b84fe1c60ba99d08fec96ac08c11f9d69196fd95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:22.731803 containerd[1640]: time="2026-01-20T02:50:22.726830052Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:23.244338 containerd[1640]: time="2026-01-20T02:50:23.240439180Z" level=error msg="Failed to destroy network for sandbox \"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:23.259802 systemd[1]: run-netns-cni\x2d79106d4d\x2d4751\x2d3ca6\x2d0806\x2d772cd86fb53b.mount: Deactivated successfully. Jan 20 02:50:23.280798 containerd[1640]: time="2026-01-20T02:50:23.280734640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:23.281887 kubelet[2963]: E0120 02:50:23.281686 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:23.281887 kubelet[2963]: E0120 02:50:23.281786 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:50:23.281887 kubelet[2963]: E0120 02:50:23.281825 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" Jan 20 02:50:23.282458 kubelet[2963]: E0120 02:50:23.281921 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c57d378dd5ddaae0355dff94e7e48d9c2c325e0b55fccda3ab9bb66cff5d333\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:50:23.738992 containerd[1640]: time="2026-01-20T02:50:23.737951310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:23.754648 containerd[1640]: time="2026-01-20T02:50:23.754200559Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:24.290732 containerd[1640]: time="2026-01-20T02:50:24.288653300Z" level=error msg="Failed to destroy network for sandbox \"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.294660 systemd[1]: run-netns-cni\x2d3d20db9d\x2d59c0\x2d92fa\x2d70d6\x2d90764bdc5ffe.mount: Deactivated successfully. Jan 20 02:50:24.397557 containerd[1640]: time="2026-01-20T02:50:24.397331938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.406950 kubelet[2963]: E0120 02:50:24.400793 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.406950 kubelet[2963]: E0120 02:50:24.400906 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:50:24.406950 kubelet[2963]: E0120 02:50:24.400935 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zb7gt" Jan 20 02:50:24.407893 kubelet[2963]: E0120 02:50:24.401001 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46956c63c818407e2d477c4e8443ffa6a3916eda9d6148ee706ad1e2b08b8372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:50:24.433255 containerd[1640]: time="2026-01-20T02:50:24.433194021Z" level=error msg="Failed to destroy network for sandbox \"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.446158 systemd[1]: run-netns-cni\x2daf19c184\x2d51e3\x2dc166\x2d34ba\x2da4d7225a43d2.mount: Deactivated successfully. 
Jan 20 02:50:24.578466 containerd[1640]: time="2026-01-20T02:50:24.578218306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45576995-lqpp4,Uid:adda9552-df9a-4757-9456-d2fe24c1f167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.581536 kubelet[2963]: E0120 02:50:24.581249 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:24.581536 kubelet[2963]: E0120 02:50:24.581325 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:50:24.581536 kubelet[2963]: E0120 02:50:24.581351 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5d45576995-lqpp4" Jan 20 02:50:24.581749 kubelet[2963]: E0120 02:50:24.581414 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45576995-lqpp4_calico-system(adda9552-df9a-4757-9456-d2fe24c1f167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40875870feec6e8d02df11553d7bb79a39f1987ea3f9a2289b9c6d9e0ded8943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45576995-lqpp4" podUID="adda9552-df9a-4757-9456-d2fe24c1f167" Jan 20 02:50:25.340050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308673030.mount: Deactivated successfully. Jan 20 02:50:25.473587 containerd[1640]: time="2026-01-20T02:50:25.473454622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:50:25.530115 containerd[1640]: time="2026-01-20T02:50:25.491575439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 02:50:25.530115 containerd[1640]: time="2026-01-20T02:50:25.499391254Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:50:25.530115 containerd[1640]: time="2026-01-20T02:50:25.511396100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m15.951124608s" Jan 20 02:50:25.530115 containerd[1640]: time="2026-01-20T02:50:25.520604592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 02:50:25.530115 containerd[1640]: time="2026-01-20T02:50:25.529029766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:50:25.607777 containerd[1640]: time="2026-01-20T02:50:25.607580436Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 02:50:25.687800 containerd[1640]: time="2026-01-20T02:50:25.685578268Z" level=info msg="Container 5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:50:25.745439 containerd[1640]: time="2026-01-20T02:50:25.744768280Z" level=info msg="CreateContainer within sandbox \"fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef\"" Jan 20 02:50:25.754838 containerd[1640]: time="2026-01-20T02:50:25.754752833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:25.756112 containerd[1640]: time="2026-01-20T02:50:25.756030180Z" level=info msg="StartContainer for \"5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef\"" Jan 20 02:50:25.783799 containerd[1640]: time="2026-01-20T02:50:25.782650709Z" 
level=info msg="connecting to shim 5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef" address="unix:///run/containerd/s/d8ec0a28bcc384d55232d12ce135ed8167c4dec58bcf840747662ab19c1f5f8f" protocol=ttrpc version=3 Jan 20 02:50:26.103117 systemd[1]: Started cri-containerd-5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef.scope - libcontainer container 5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef. Jan 20 02:50:26.436883 containerd[1640]: time="2026-01-20T02:50:26.428376331Z" level=error msg="Failed to destroy network for sandbox \"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:26.443996 systemd[1]: run-netns-cni\x2dc10d7a3c\x2d9ad8\x2dc27a\x2d896a\x2d9fc08a5ceb80.mount: Deactivated successfully. Jan 20 02:50:26.458000 audit: BPF prog-id=172 op=LOAD Jan 20 02:50:26.472976 kernel: audit: type=1334 audit(1768877426.458:572): prog-id=172 op=LOAD Jan 20 02:50:26.458000 audit[5177]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.531717 kernel: audit: type=1300 audit(1768877426.458:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.531916 kernel: audit: type=1327 audit(1768877426.458:572): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.458000 audit: BPF prog-id=173 op=LOAD Jan 20 02:50:26.566942 kernel: audit: type=1334 audit(1768877426.458:573): prog-id=173 op=LOAD Jan 20 02:50:26.567075 kernel: audit: type=1300 audit(1768877426.458:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.458000 audit[5177]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.628391 kernel: audit: type=1327 audit(1768877426.458:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.458000 audit: BPF prog-id=173 op=UNLOAD Jan 20 02:50:26.689645 kernel: audit: type=1334 audit(1768877426.458:574): prog-id=173 op=UNLOAD Jan 20 02:50:26.458000 audit[5177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.793639 containerd[1640]: time="2026-01-20T02:50:26.793467253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:26.800029 containerd[1640]: time="2026-01-20T02:50:26.797619896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:26.800614 kubelet[2963]: E0120 02:50:26.800560 2963 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:26.810134 kernel: audit: type=1300 audit(1768877426.458:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.810465 kernel: audit: type=1327 audit(1768877426.458:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.810978 kernel: audit: type=1334 audit(1768877426.458:575): prog-id=172 op=UNLOAD Jan 20 02:50:26.458000 audit: BPF prog-id=172 op=UNLOAD Jan 20 02:50:26.811527 kubelet[2963]: E0120 02:50:26.811389 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:50:26.811811 kubelet[2963]: E0120 02:50:26.811682 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" Jan 20 02:50:26.834048 kubelet[2963]: E0120 02:50:26.817117 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"252bd7e0df1a38b709dbce1aa5e5e11f4574d4ca27d6e14d98327fe6d8f62f68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:50:26.458000 audit[5177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.458000 audit: BPF prog-id=174 op=LOAD Jan 20 02:50:26.458000 audit[5177]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3481 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:26.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383962363535343861633564653661393834653562613332393933 Jan 20 02:50:26.945786 containerd[1640]: time="2026-01-20T02:50:26.943880968Z" level=info msg="StartContainer for \"5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef\" returns successfully" Jan 20 02:50:27.039589 kubelet[2963]: E0120 02:50:27.034253 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:28.008546 containerd[1640]: time="2026-01-20T02:50:28.004102293Z" level=error msg="Failed to destroy network for sandbox \"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:28.042110 systemd[1]: run-netns-cni\x2db93d90df\x2d49be\x2d8a79\x2d8478\x2dc8527fb303e5.mount: Deactivated successfully. 
Jan 20 02:50:28.068636 kubelet[2963]: E0120 02:50:28.063650 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:28.078548 containerd[1640]: time="2026-01-20T02:50:28.078295040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:28.082141 kubelet[2963]: E0120 02:50:28.082012 2963 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:50:28.082388 kubelet[2963]: E0120 02:50:28.082313 2963 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:50:28.082388 kubelet[2963]: E0120 02:50:28.082344 2963 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" Jan 20 02:50:28.082818 kubelet[2963]: E0120 02:50:28.082723 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"353e627bf70cd5dfa5d683eee3f9ad9fab07f465fba2f5b805fa2f966fe8230b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:50:29.075752 kubelet[2963]: I0120 02:50:29.064225 2963 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 02:50:29.075752 kubelet[2963]: E0120 02:50:29.064951 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:29.633350 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 02:50:29.634640 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 02:50:31.834293 kubelet[2963]: I0120 02:50:31.834060 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qft95" podStartSLOduration=8.429089092 podStartE2EDuration="2m18.834032546s" podCreationTimestamp="2026-01-20 02:48:13 +0000 UTC" firstStartedPulling="2026-01-20 02:48:15.128104118 +0000 UTC m=+74.383170566" lastFinishedPulling="2026-01-20 02:50:25.533047572 +0000 UTC m=+204.788114020" observedRunningTime="2026-01-20 02:50:27.529930958 +0000 UTC m=+206.784997426" watchObservedRunningTime="2026-01-20 02:50:31.834032546 +0000 UTC m=+211.089098995" Jan 20 02:50:32.122320 kubelet[2963]: I0120 02:50:32.122161 2963 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-ca-bundle\") pod \"adda9552-df9a-4757-9456-d2fe24c1f167\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " Jan 20 02:50:32.123394 kubelet[2963]: I0120 02:50:32.122604 2963 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-backend-key-pair\") pod \"adda9552-df9a-4757-9456-d2fe24c1f167\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " Jan 20 02:50:32.123394 kubelet[2963]: I0120 02:50:32.122655 2963 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r9k2\" (UniqueName: \"kubernetes.io/projected/adda9552-df9a-4757-9456-d2fe24c1f167-kube-api-access-2r9k2\") pod \"adda9552-df9a-4757-9456-d2fe24c1f167\" (UID: \"adda9552-df9a-4757-9456-d2fe24c1f167\") " Jan 20 02:50:32.163576 kubelet[2963]: I0120 02:50:32.160422 2963 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"adda9552-df9a-4757-9456-d2fe24c1f167" (UID: "adda9552-df9a-4757-9456-d2fe24c1f167"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:50:32.229634 kubelet[2963]: I0120 02:50:32.225215 2963 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 02:50:32.261925 kubelet[2963]: I0120 02:50:32.257589 2963 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adda9552-df9a-4757-9456-d2fe24c1f167-kube-api-access-2r9k2" (OuterVolumeSpecName: "kube-api-access-2r9k2") pod "adda9552-df9a-4757-9456-d2fe24c1f167" (UID: "adda9552-df9a-4757-9456-d2fe24c1f167"). InnerVolumeSpecName "kube-api-access-2r9k2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:50:32.260905 systemd[1]: var-lib-kubelet-pods-adda9552\x2ddf9a\x2d4757\x2d9456\x2dd2fe24c1f167-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2r9k2.mount: Deactivated successfully. Jan 20 02:50:32.261120 systemd[1]: var-lib-kubelet-pods-adda9552\x2ddf9a\x2d4757\x2d9456\x2dd2fe24c1f167-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 02:50:32.288462 kubelet[2963]: I0120 02:50:32.288369 2963 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "adda9552-df9a-4757-9456-d2fe24c1f167" (UID: "adda9552-df9a-4757-9456-d2fe24c1f167"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 02:50:32.331892 kubelet[2963]: I0120 02:50:32.331779 2963 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adda9552-df9a-4757-9456-d2fe24c1f167-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 02:50:32.331892 kubelet[2963]: I0120 02:50:32.331817 2963 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2r9k2\" (UniqueName: \"kubernetes.io/projected/adda9552-df9a-4757-9456-d2fe24c1f167-kube-api-access-2r9k2\") on node \"localhost\" DevicePath \"\"" Jan 20 02:50:32.471089 systemd[1]: Removed slice kubepods-besteffort-podadda9552_df9a_4757_9456_d2fe24c1f167.slice - libcontainer container kubepods-besteffort-podadda9552_df9a_4757_9456_d2fe24c1f167.slice. Jan 20 02:50:33.392621 kubelet[2963]: I0120 02:50:33.387976 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea0ad3c0-ee09-401c-8807-5b06e8d22025-whisker-backend-key-pair\") pod \"whisker-68fcfd7799-l9qd2\" (UID: \"ea0ad3c0-ee09-401c-8807-5b06e8d22025\") " pod="calico-system/whisker-68fcfd7799-l9qd2" Jan 20 02:50:33.392621 kubelet[2963]: I0120 02:50:33.388088 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0ad3c0-ee09-401c-8807-5b06e8d22025-whisker-ca-bundle\") pod \"whisker-68fcfd7799-l9qd2\" (UID: \"ea0ad3c0-ee09-401c-8807-5b06e8d22025\") " pod="calico-system/whisker-68fcfd7799-l9qd2" Jan 20 02:50:33.392621 kubelet[2963]: I0120 02:50:33.388129 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjqn\" (UniqueName: \"kubernetes.io/projected/ea0ad3c0-ee09-401c-8807-5b06e8d22025-kube-api-access-kjjqn\") pod \"whisker-68fcfd7799-l9qd2\" (UID: 
\"ea0ad3c0-ee09-401c-8807-5b06e8d22025\") " pod="calico-system/whisker-68fcfd7799-l9qd2" Jan 20 02:50:33.439618 systemd[1]: Created slice kubepods-besteffort-podea0ad3c0_ee09_401c_8807_5b06e8d22025.slice - libcontainer container kubepods-besteffort-podea0ad3c0_ee09_401c_8807_5b06e8d22025.slice. Jan 20 02:50:33.734327 kubelet[2963]: E0120 02:50:33.731447 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:33.739527 containerd[1640]: time="2026-01-20T02:50:33.739364574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,}" Jan 20 02:50:33.800253 kubelet[2963]: I0120 02:50:33.796875 2963 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adda9552-df9a-4757-9456-d2fe24c1f167" path="/var/lib/kubelet/pods/adda9552-df9a-4757-9456-d2fe24c1f167/volumes" Jan 20 02:50:33.872230 containerd[1640]: time="2026-01-20T02:50:33.871852923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fcfd7799-l9qd2,Uid:ea0ad3c0-ee09-401c-8807-5b06e8d22025,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:34.789130 kubelet[2963]: E0120 02:50:34.787765 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:34.814873 containerd[1640]: time="2026-01-20T02:50:34.811958629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,}" Jan 20 02:50:35.766156 kubelet[2963]: E0120 02:50:35.759911 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:35.792412 
containerd[1640]: time="2026-01-20T02:50:35.792362471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:36.727584 containerd[1640]: time="2026-01-20T02:50:36.726004978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:36.734565 containerd[1640]: time="2026-01-20T02:50:36.734409904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,}" Jan 20 02:50:36.988763 systemd-networkd[1544]: cali0bedf2e42c2: Link UP Jan 20 02:50:36.999732 systemd-networkd[1544]: cali0bedf2e42c2: Gained carrier Jan 20 02:50:37.266670 containerd[1640]: 2026-01-20 02:50:34.541 [INFO][5346] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:50:37.266670 containerd[1640]: 2026-01-20 02:50:34.826 [INFO][5346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xll5r-eth0 coredns-66bc5c9577- kube-system 1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8 1140 0 2026-01-20 02:47:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xll5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bedf2e42c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-" Jan 20 02:50:37.266670 
containerd[1640]: 2026-01-20 02:50:34.847 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.266670 containerd[1640]: 2026-01-20 02:50:36.199 [INFO][5378] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" HandleID="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Workload="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.213 [INFO][5378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" HandleID="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Workload="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004addd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xll5r", "timestamp":"2026-01-20 02:50:36.199832428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.214 [INFO][5378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.215 [INFO][5378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.217 [INFO][5378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.288 [INFO][5378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" host="localhost" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.406 [INFO][5378] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.472 [INFO][5378] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.489 [INFO][5378] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.525 [INFO][5378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:37.283971 containerd[1640]: 2026-01-20 02:50:36.533 [INFO][5378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" host="localhost" Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.597 [INFO][5378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5 Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.635 [INFO][5378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" host="localhost" Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.705 [INFO][5378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" host="localhost" Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.705 [INFO][5378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" host="localhost" Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.705 [INFO][5378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:37.284395 containerd[1640]: 2026-01-20 02:50:36.705 [INFO][5378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" HandleID="k8s-pod-network.94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Workload="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.285195 containerd[1640]: 2026-01-20 02:50:36.729 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xll5r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xll5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bedf2e42c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:37.285195 containerd[1640]: 2026-01-20 02:50:36.730 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.285195 containerd[1640]: 2026-01-20 02:50:36.730 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bedf2e42c2 ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 
02:50:37.285195 containerd[1640]: 2026-01-20 02:50:37.011 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.285195 containerd[1640]: 2026-01-20 02:50:37.047 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xll5r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5", Pod:"coredns-66bc5c9577-xll5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bedf2e42c2", 
MAC:"c2:7a:48:f2:50:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:37.285195 containerd[1640]: 2026-01-20 02:50:37.231 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" Namespace="kube-system" Pod="coredns-66bc5c9577-xll5r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xll5r-eth0" Jan 20 02:50:37.658016 systemd-networkd[1544]: cali31ec259a10a: Link UP Jan 20 02:50:37.661576 systemd-networkd[1544]: cali31ec259a10a: Gained carrier Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:35.226 [INFO][5382] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:35.321 [INFO][5382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--czxwc-eth0 coredns-66bc5c9577- kube-system 74212177-3278-4b1c-8a68-155074b2aa8f 1138 0 2026-01-20 02:47:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] 
map[] [] [] []} {k8s localhost coredns-66bc5c9577-czxwc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31ec259a10a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:35.383 [INFO][5382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.212 [INFO][5404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" HandleID="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Workload="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.215 [INFO][5404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" HandleID="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Workload="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a7250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-czxwc", "timestamp":"2026-01-20 02:50:36.208683188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:37.780550 
containerd[1640]: 2026-01-20 02:50:36.216 [INFO][5404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.706 [INFO][5404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.706 [INFO][5404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.775 [INFO][5404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.832 [INFO][5404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:36.919 [INFO][5404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.001 [INFO][5404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.130 [INFO][5404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.131 [INFO][5404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.222 [INFO][5404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3 Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.416 [INFO][5404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.489 [INFO][5404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.489 [INFO][5404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" host="localhost" Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.489 [INFO][5404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:37.780550 containerd[1640]: 2026-01-20 02:50:37.489 [INFO][5404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" HandleID="k8s-pod-network.7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Workload="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.573 [INFO][5382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czxwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"74212177-3278-4b1c-8a68-155074b2aa8f", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-czxwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31ec259a10a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.573 [INFO][5382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.575 [INFO][5382] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31ec259a10a ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.664 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.666 [INFO][5382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czxwc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"74212177-3278-4b1c-8a68-155074b2aa8f", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3", 
Pod:"coredns-66bc5c9577-czxwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31ec259a10a", MAC:"9e:a1:d8:f4:0f:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:37.782117 containerd[1640]: 2026-01-20 02:50:37.740 [INFO][5382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" Namespace="kube-system" Pod="coredns-66bc5c9577-czxwc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czxwc-eth0" Jan 20 02:50:37.810938 containerd[1640]: time="2026-01-20T02:50:37.807758729Z" level=info msg="connecting to shim 94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5" address="unix:///run/containerd/s/e3a56ae713bdd2db737b97f52520449702fd0ba5b50347d5a51ad08927264840" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:38.209444 containerd[1640]: time="2026-01-20T02:50:38.202425542Z" level=info msg="connecting to shim 7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3" 
address="unix:///run/containerd/s/1abc3a894dda9216156d41b1bae1a1438922c3b0628a5974373353f0d393f686" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:38.367025 systemd-networkd[1544]: cali0f709d2181a: Link UP Jan 20 02:50:38.385690 systemd-networkd[1544]: cali0f709d2181a: Gained carrier Jan 20 02:50:38.501321 systemd[1]: Started cri-containerd-94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5.scope - libcontainer container 94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5. Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:34.594 [INFO][5360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:34.848 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68fcfd7799--l9qd2-eth0 whisker-68fcfd7799- calico-system ea0ad3c0-ee09-401c-8807-5b06e8d22025 1359 0 2026-01-20 02:50:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68fcfd7799 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68fcfd7799-l9qd2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0f709d2181a [] [] }} ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:34.848 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:36.216 [INFO][5380] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" HandleID="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Workload="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:36.218 [INFO][5380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" HandleID="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Workload="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68fcfd7799-l9qd2", "timestamp":"2026-01-20 02:50:36.216626988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:36.218 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.498 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.498 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.689 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.781 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.941 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.963 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.984 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:37.985 [INFO][5380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.004 [INFO][5380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.171 [INFO][5380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.260 [INFO][5380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.263 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" host="localhost" Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.268 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:38.521980 containerd[1640]: 2026-01-20 02:50:38.268 [INFO][5380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" HandleID="k8s-pod-network.0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Workload="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.314 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68fcfd7799--l9qd2-eth0", GenerateName:"whisker-68fcfd7799-", Namespace:"calico-system", SelfLink:"", UID:"ea0ad3c0-ee09-401c-8807-5b06e8d22025", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68fcfd7799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68fcfd7799-l9qd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f709d2181a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.315 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.315 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f709d2181a ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.366 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.381 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" 
WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68fcfd7799--l9qd2-eth0", GenerateName:"whisker-68fcfd7799-", Namespace:"calico-system", SelfLink:"", UID:"ea0ad3c0-ee09-401c-8807-5b06e8d22025", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68fcfd7799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de", Pod:"whisker-68fcfd7799-l9qd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f709d2181a", MAC:"1e:5a:d3:a6:39:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:38.523190 containerd[1640]: 2026-01-20 02:50:38.495 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" Namespace="calico-system" Pod="whisker-68fcfd7799-l9qd2" WorkloadEndpoint="localhost-k8s-whisker--68fcfd7799--l9qd2-eth0" Jan 20 02:50:38.603375 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 02:50:38.603889 kernel: audit: type=1334 
audit(1768877438.591:577): prog-id=175 op=LOAD Jan 20 02:50:38.591000 audit: BPF prog-id=175 op=LOAD Jan 20 02:50:38.606000 audit: BPF prog-id=176 op=LOAD Jan 20 02:50:38.631154 kernel: audit: type=1334 audit(1768877438.606:578): prog-id=176 op=LOAD Jan 20 02:50:38.606000 audit[5627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.640261 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:38.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.759913 kernel: audit: type=1300 audit(1768877438.606:578): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.760076 kernel: audit: type=1327 audit(1768877438.606:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.610000 audit: BPF prog-id=176 op=UNLOAD Jan 20 02:50:38.789819 kernel: audit: type=1334 audit(1768877438.610:579): prog-id=176 op=UNLOAD Jan 20 02:50:38.794955 kernel: audit: type=1300 audit(1768877438.610:579): arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.610000 audit[5627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.789352 systemd[1]: Started cri-containerd-7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3.scope - libcontainer container 7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3. Jan 20 02:50:38.863997 kernel: audit: type=1327 audit(1768877438.610:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.919242 kernel: audit: type=1334 audit(1768877438.613:580): prog-id=177 op=LOAD Jan 20 02:50:38.613000 audit: BPF prog-id=177 op=LOAD Jan 20 02:50:38.961737 kernel: audit: type=1300 audit(1768877438.613:580): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.613000 audit[5627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000178488 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.983741 kernel: audit: type=1327 audit(1768877438.613:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.614000 audit: BPF prog-id=178 op=LOAD Jan 20 02:50:38.614000 audit[5627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.617000 audit: BPF prog-id=178 op=UNLOAD Jan 20 02:50:38.617000 audit[5627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.617000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.617000 audit: BPF prog-id=177 op=UNLOAD Jan 20 02:50:38.617000 audit[5627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.617000 audit: BPF prog-id=179 op=LOAD Jan 20 02:50:38.617000 audit[5627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5590 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:38.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323938633938636562373137666238646332383164313561363265 Jan 20 02:50:38.993967 systemd-networkd[1544]: cali0bedf2e42c2: Gained IPv6LL Jan 20 02:50:39.199271 containerd[1640]: time="2026-01-20T02:50:39.183634183Z" level=info msg="connecting to shim 0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de" address="unix:///run/containerd/s/6ee475dcc11bb0f202114b4c71a4bae3231dd0669f74e11d024306dbfd9ab282" namespace=k8s.io 
protocol=ttrpc version=3 Jan 20 02:50:39.292000 audit: BPF prog-id=180 op=LOAD Jan 20 02:50:39.302357 systemd-networkd[1544]: calif45fae8e2ed: Link UP Jan 20 02:50:39.307300 systemd-networkd[1544]: calif45fae8e2ed: Gained carrier Jan 20 02:50:39.303000 audit: BPF prog-id=181 op=LOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=181 op=UNLOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=182 op=LOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=183 op=LOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=183 op=UNLOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=182 op=UNLOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.303000 audit: BPF prog-id=184 op=LOAD Jan 20 02:50:39.303000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5638 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333763313537313737323933306364393761623139663361633064 Jan 20 02:50:39.340958 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.006 [INFO][5420] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.131 [INFO][5420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--5hks8-eth0 goldmane-7c778bb748- calico-system 2048147f-559b-4756-8896-b644ce0ae95e 1126 0 2026-01-20 02:48:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-5hks8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif45fae8e2ed [] [] }} 
ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.131 [INFO][5420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.291 [INFO][5444] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" HandleID="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Workload="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.300 [INFO][5444] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" HandleID="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Workload="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-5hks8", "timestamp":"2026-01-20 02:50:36.291211745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:36.300 [INFO][5444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:38.263 [INFO][5444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:38.263 [INFO][5444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:38.517 [INFO][5444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:38.709 [INFO][5444] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:38.895 [INFO][5444] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.043 [INFO][5444] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.096 [INFO][5444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.102 [INFO][5444] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.113 [INFO][5444] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96 Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.167 [INFO][5444] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.255 [INFO][5444] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.255 [INFO][5444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" host="localhost" Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.255 [INFO][5444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:39.524634 containerd[1640]: 2026-01-20 02:50:39.255 [INFO][5444] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" HandleID="k8s-pod-network.631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Workload="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.277 [INFO][5420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--5hks8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2048147f-559b-4756-8896-b644ce0ae95e", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-5hks8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif45fae8e2ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.278 [INFO][5420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.278 [INFO][5420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif45fae8e2ed ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.304 [INFO][5420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.308 [INFO][5420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--5hks8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2048147f-559b-4756-8896-b644ce0ae95e", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96", Pod:"goldmane-7c778bb748-5hks8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif45fae8e2ed", MAC:"6a:52:4b:74:36:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:39.525808 containerd[1640]: 2026-01-20 02:50:39.409 [INFO][5420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" Namespace="calico-system" Pod="goldmane-7c778bb748-5hks8" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5hks8-eth0" Jan 20 02:50:39.566869 containerd[1640]: time="2026-01-20T02:50:39.565899344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xll5r,Uid:1d5a1bc5-63f2-41a2-84b0-e2d2a5e693f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5\"" Jan 20 02:50:39.582206 kubelet[2963]: E0120 02:50:39.568595 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:39.569757 systemd-networkd[1544]: cali31ec259a10a: Gained IPv6LL Jan 20 02:50:39.601099 containerd[1640]: time="2026-01-20T02:50:39.600632117Z" level=info msg="CreateContainer within sandbox \"94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:50:39.626941 systemd[1]: Started cri-containerd-0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de.scope - libcontainer container 0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de. Jan 20 02:50:39.713630 containerd[1640]: time="2026-01-20T02:50:39.713572523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czxwc,Uid:74212177-3278-4b1c-8a68-155074b2aa8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3\"" Jan 20 02:50:39.733988 kubelet[2963]: E0120 02:50:39.725925 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:39.754369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641781268.mount: Deactivated successfully. 
Jan 20 02:50:39.759817 containerd[1640]: time="2026-01-20T02:50:39.759007135Z" level=info msg="CreateContainer within sandbox \"7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:50:39.768445 containerd[1640]: time="2026-01-20T02:50:39.767894515Z" level=info msg="Container b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:50:39.795000 audit: BPF prog-id=185 op=LOAD Jan 20 02:50:39.797000 audit: BPF prog-id=186 op=LOAD Jan 20 02:50:39.797000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.797000 audit: BPF prog-id=186 op=UNLOAD Jan 20 02:50:39.797000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.799000 audit: BPF prog-id=187 op=LOAD Jan 20 02:50:39.799000 audit[5733]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.799000 audit: BPF prog-id=188 op=LOAD Jan 20 02:50:39.799000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.799000 audit: BPF prog-id=188 op=UNLOAD Jan 20 02:50:39.799000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.799000 audit: BPF prog-id=187 op=UNLOAD Jan 20 
02:50:39.799000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.799000 audit: BPF prog-id=189 op=LOAD Jan 20 02:50:39.799000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5702 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:39.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333132313362616634343338316362653634636664643733663033 Jan 20 02:50:39.817905 containerd[1640]: time="2026-01-20T02:50:39.817119714Z" level=info msg="CreateContainer within sandbox \"94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663\"" Jan 20 02:50:39.823584 containerd[1640]: time="2026-01-20T02:50:39.822637559Z" level=info msg="StartContainer for \"b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663\"" Jan 20 02:50:39.829445 containerd[1640]: time="2026-01-20T02:50:39.828806054Z" level=info msg="Container 4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f: CDI devices 
from CRI Config.CDIDevices: []" Jan 20 02:50:39.831804 containerd[1640]: time="2026-01-20T02:50:39.831682151Z" level=info msg="connecting to shim b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663" address="unix:///run/containerd/s/e3a56ae713bdd2db737b97f52520449702fd0ba5b50347d5a51ad08927264840" protocol=ttrpc version=3 Jan 20 02:50:39.835955 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:39.847198 containerd[1640]: time="2026-01-20T02:50:39.844039625Z" level=info msg="connecting to shim 631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96" address="unix:///run/containerd/s/937c0debfe9bb895552f9fbe63ae88f3a7785bb8451b72c3b44d10facd169b13" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:39.852831 containerd[1640]: time="2026-01-20T02:50:39.852589149Z" level=info msg="CreateContainer within sandbox \"7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f\"" Jan 20 02:50:39.869624 containerd[1640]: time="2026-01-20T02:50:39.869559727Z" level=info msg="StartContainer for \"4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f\"" Jan 20 02:50:39.877331 containerd[1640]: time="2026-01-20T02:50:39.877223129Z" level=info msg="connecting to shim 4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f" address="unix:///run/containerd/s/1abc3a894dda9216156d41b1bae1a1438922c3b0628a5974373353f0d393f686" protocol=ttrpc version=3 Jan 20 02:50:39.890197 systemd-networkd[1544]: cali0f709d2181a: Gained IPv6LL Jan 20 02:50:40.061829 systemd[1]: Started cri-containerd-4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f.scope - libcontainer container 4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f. 
Jan 20 02:50:40.109972 systemd-networkd[1544]: calic3427cb42c2: Link UP Jan 20 02:50:40.110813 systemd-networkd[1544]: calic3427cb42c2: Gained carrier Jan 20 02:50:40.219180 systemd[1]: Started cri-containerd-b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663.scope - libcontainer container b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663. Jan 20 02:50:40.228306 systemd[1]: Started cri-containerd-631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96.scope - libcontainer container 631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96. Jan 20 02:50:40.333000 audit: BPF prog-id=190 op=LOAD Jan 20 02:50:40.339000 audit: BPF prog-id=191 op=LOAD Jan 20 02:50:40.339000 audit[5789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.339000 audit: BPF prog-id=191 op=UNLOAD Jan 20 02:50:40.339000 audit[5789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.339000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.340000 audit: BPF prog-id=192 op=LOAD Jan 20 02:50:40.340000 audit[5789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.340000 audit: BPF prog-id=193 op=LOAD Jan 20 02:50:40.340000 audit[5789]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.340000 audit: BPF prog-id=193 op=UNLOAD Jan 20 02:50:40.340000 audit[5789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:50:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.340000 audit: BPF prog-id=192 op=UNLOAD Jan 20 02:50:40.340000 audit[5789]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.340000 audit: BPF prog-id=194 op=LOAD Jan 20 02:50:40.340000 audit[5789]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5590 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239663364396439396264633537306232643633306365303037326437 Jan 20 02:50:40.407000 audit: BPF prog-id=195 op=LOAD Jan 20 02:50:40.410000 audit: BPF prog-id=196 op=LOAD Jan 20 02:50:40.410000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.410000 audit: BPF prog-id=196 op=UNLOAD Jan 20 02:50:40.410000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.411000 audit: BPF prog-id=197 op=LOAD Jan 20 02:50:40.411000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.411000 audit: BPF prog-id=198 op=LOAD Jan 20 02:50:40.411000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.412000 audit: BPF prog-id=198 op=UNLOAD Jan 20 02:50:40.412000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.412000 audit: BPF prog-id=197 op=UNLOAD Jan 20 02:50:40.412000 audit[5792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.412000 audit: BPF prog-id=199 op=LOAD Jan 20 02:50:40.412000 audit[5792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 
ppid=5777 pid=5792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633316561306361663962393562613230353430363861343763663730 Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.196 [INFO][5466] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.462 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zb7gt-eth0 csi-node-driver- calico-system 2beb3373-3a79-403b-953d-80d6dc35b793 898 0 2026-01-20 02:48:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zb7gt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic3427cb42c2 [] [] }} ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.463 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.946 
[INFO][5568] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" HandleID="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Workload="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.946 [INFO][5568] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" HandleID="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Workload="localhost-k8s-csi--node--driver--zb7gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zb7gt", "timestamp":"2026-01-20 02:50:37.946233632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:37.946 [INFO][5568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.256 [INFO][5568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.259 [INFO][5568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.293 [INFO][5568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.413 [INFO][5568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.533 [INFO][5568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.561 [INFO][5568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.616 [INFO][5568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.616 [INFO][5568] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.661 [INFO][5568] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961 Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.770 [INFO][5568] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.888 [INFO][5568] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.888 [INFO][5568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" host="localhost" Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.888 [INFO][5568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:40.420190 containerd[1640]: 2026-01-20 02:50:39.888 [INFO][5568] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" HandleID="k8s-pod-network.802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Workload="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:39.984 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zb7gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2beb3373-3a79-403b-953d-80d6dc35b793", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zb7gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3427cb42c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:40.004 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:40.004 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3427cb42c2 ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:40.192 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:40.194 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" 
Namespace="calico-system" Pod="csi-node-driver-zb7gt" WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zb7gt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2beb3373-3a79-403b-953d-80d6dc35b793", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961", Pod:"csi-node-driver-zb7gt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3427cb42c2", MAC:"46:10:59:e0:b9:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:40.425207 containerd[1640]: 2026-01-20 02:50:40.337 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" Namespace="calico-system" Pod="csi-node-driver-zb7gt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zb7gt-eth0" Jan 20 02:50:40.428442 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:40.435000 audit: BPF prog-id=200 op=LOAD Jan 20 02:50:40.441000 audit: BPF prog-id=201 op=LOAD Jan 20 02:50:40.441000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.442000 audit: BPF prog-id=201 op=UNLOAD Jan 20 02:50:40.442000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.446000 audit: BPF prog-id=202 op=LOAD Jan 20 02:50:40.446000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 02:50:40.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.447000 audit: BPF prog-id=203 op=LOAD Jan 20 02:50:40.447000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.452000 audit: BPF prog-id=203 op=UNLOAD Jan 20 02:50:40.452000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.452000 audit: BPF prog-id=202 op=UNLOAD Jan 20 02:50:40.452000 audit[5793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.455000 audit: BPF prog-id=204 op=LOAD Jan 20 02:50:40.455000 audit[5793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5638 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:40.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463366136643962653262646637303364323763613533656461623130 Jan 20 02:50:40.625415 systemd-networkd[1544]: cali33b3ec6446e: Link UP Jan 20 02:50:40.634341 systemd-networkd[1544]: cali33b3ec6446e: Gained carrier Jan 20 02:50:40.740453 containerd[1640]: time="2026-01-20T02:50:40.740324585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fcfd7799-l9qd2,Uid:ea0ad3c0-ee09-401c-8807-5b06e8d22025,Namespace:calico-system,Attempt:0,} returns sandbox id \"0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de\"" Jan 20 02:50:40.779741 containerd[1640]: time="2026-01-20T02:50:40.771913049Z" level=info msg="StartContainer for \"b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663\" returns successfully" Jan 20 02:50:40.794866 containerd[1640]: time="2026-01-20T02:50:40.794812144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:36.992 [INFO][5453] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:37.331 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0 calico-kube-controllers-554b6967f8- calico-system 9eab50e8-9c7c-4942-9bf1-628e8f6481c8 1133 0 2026-01-20 02:48:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:554b6967f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-554b6967f8-4mv9r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali33b3ec6446e [] [] }} ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:37.331 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:37.989 [INFO][5540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" HandleID="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Workload="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:37.989 [INFO][5540] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" HandleID="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Workload="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-554b6967f8-4mv9r", "timestamp":"2026-01-20 02:50:37.989009754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:37.989 [INFO][5540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:39.895 [INFO][5540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:39.896 [INFO][5540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:39.993 [INFO][5540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.093 [INFO][5540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.287 [INFO][5540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.301 [INFO][5540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.359 [INFO][5540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.366 [INFO][5540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.390 [INFO][5540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671 Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.450 [INFO][5540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.548 [INFO][5540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.551 [INFO][5540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" host="localhost" Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.551 [INFO][5540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:40.796889 containerd[1640]: 2026-01-20 02:50:40.551 [INFO][5540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" HandleID="k8s-pod-network.c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Workload="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.803559 containerd[1640]: 2026-01-20 02:50:40.584 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0", GenerateName:"calico-kube-controllers-554b6967f8-", Namespace:"calico-system", SelfLink:"", UID:"9eab50e8-9c7c-4942-9bf1-628e8f6481c8", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554b6967f8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-554b6967f8-4mv9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b3ec6446e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:40.803559 containerd[1640]: 2026-01-20 02:50:40.585 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.803559 containerd[1640]: 2026-01-20 02:50:40.585 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33b3ec6446e ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.803559 containerd[1640]: 2026-01-20 02:50:40.640 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.803559 containerd[1640]: 
2026-01-20 02:50:40.642 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0", GenerateName:"calico-kube-controllers-554b6967f8-", Namespace:"calico-system", SelfLink:"", UID:"9eab50e8-9c7c-4942-9bf1-628e8f6481c8", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"554b6967f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671", Pod:"calico-kube-controllers-554b6967f8-4mv9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b3ec6446e", MAC:"1a:06:71:f9:ed:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:40.803559 containerd[1640]: 
2026-01-20 02:50:40.765 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" Namespace="calico-system" Pod="calico-kube-controllers-554b6967f8-4mv9r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--554b6967f8--4mv9r-eth0" Jan 20 02:50:40.803559 containerd[1640]: time="2026-01-20T02:50:40.800701244Z" level=info msg="StartContainer for \"4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f\" returns successfully" Jan 20 02:50:40.874024 containerd[1640]: time="2026-01-20T02:50:40.872354805Z" level=info msg="connecting to shim 802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961" address="unix:///run/containerd/s/e1d1f93e3fd1b90ae676a2ceaa4ff2ef152e585c5529330c05556656924cc01f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:40.913847 systemd-networkd[1544]: calif45fae8e2ed: Gained IPv6LL Jan 20 02:50:40.936912 kubelet[2963]: E0120 02:50:40.936626 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:40.971732 containerd[1640]: time="2026-01-20T02:50:40.971437021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5hks8,Uid:2048147f-559b-4756-8896-b644ce0ae95e,Namespace:calico-system,Attempt:0,} returns sandbox id \"631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96\"" Jan 20 02:50:41.018865 containerd[1640]: time="2026-01-20T02:50:41.017914819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:41.044872 containerd[1640]: time="2026-01-20T02:50:41.042657876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" Jan 20 02:50:41.048965 containerd[1640]: time="2026-01-20T02:50:41.048605217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:41.058362 kubelet[2963]: E0120 02:50:41.058234 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:50:41.058362 kubelet[2963]: E0120 02:50:41.058339 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:50:41.062452 kubelet[2963]: E0120 02:50:41.058707 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:41.090589 containerd[1640]: time="2026-01-20T02:50:41.089855869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:50:41.538884 containerd[1640]: time="2026-01-20T02:50:41.538680177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:41.654644 containerd[1640]: time="2026-01-20T02:50:41.649084944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" Jan 20 02:50:41.654644 containerd[1640]: time="2026-01-20T02:50:41.649232528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:41.665455 systemd[1]: Started cri-containerd-802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961.scope - libcontainer container 802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961. Jan 20 02:50:41.682806 kubelet[2963]: E0120 02:50:41.665559 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:50:41.682806 kubelet[2963]: E0120 02:50:41.665615 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:50:41.682806 kubelet[2963]: E0120 02:50:41.665858 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:41.682806 kubelet[2963]: E0120 02:50:41.665904 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:41.748262 containerd[1640]: time="2026-01-20T02:50:41.742145488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:50:41.753398 systemd-networkd[1544]: cali33b3ec6446e: Gained IPv6LL Jan 20 02:50:41.783123 containerd[1640]: time="2026-01-20T02:50:41.782913294Z" level=info msg="connecting to shim c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671" address="unix:///run/containerd/s/eafab85d78c9a078804be3eba0bf4bb30dbc1de187cc4b72e072ff70cfa87e09" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:41.806582 containerd[1640]: time="2026-01-20T02:50:41.800190511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:42.002000 audit[6000]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=6000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:42.002000 audit[6000]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2bc6b600 a2=0 a3=7ffc2bc6b5ec items=0 ppid=3069 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:42.024357 kubelet[2963]: E0120 02:50:42.024177 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:42.024000 audit[6000]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=6000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 
02:50:42.024000 audit[6000]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc2bc6b600 a2=0 a3=0 items=0 ppid=3069 pid=6000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:42.042183 kubelet[2963]: E0120 02:50:42.038353 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:42.042183 kubelet[2963]: E0120 02:50:42.039431 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:42.066684 systemd-networkd[1544]: calic3427cb42c2: Gained IPv6LL Jan 20 02:50:42.130241 containerd[1640]: time="2026-01-20T02:50:42.129829782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:42.139000 audit: BPF prog-id=205 op=LOAD Jan 20 02:50:42.150914 containerd[1640]: time="2026-01-20T02:50:42.150262287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:50:42.150914 containerd[1640]: 
time="2026-01-20T02:50:42.150414278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:42.150000 audit: BPF prog-id=206 op=LOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=206 op=UNLOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=207 op=LOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=208 op=LOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=208 op=UNLOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=207 op=UNLOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.150000 audit: BPF prog-id=209 op=LOAD Jan 20 02:50:42.150000 audit[5929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5896 pid=5929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326565383163373638343464323436353436653533326665383166 Jan 20 02:50:42.168964 kubelet[2963]: E0120 02:50:42.165710 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:50:42.168964 kubelet[2963]: E0120 02:50:42.165811 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:50:42.168964 kubelet[2963]: E0120 02:50:42.165905 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:42.168964 kubelet[2963]: E0120 02:50:42.165959 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:50:42.218980 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:42.221996 systemd[1]: Started cri-containerd-c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671.scope - libcontainer container c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671. 
Jan 20 02:50:42.553643 kubelet[2963]: I0120 02:50:42.552173 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-czxwc" podStartSLOduration=218.552153807 podStartE2EDuration="3m38.552153807s" podCreationTimestamp="2026-01-20 02:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:50:41.165976955 +0000 UTC m=+220.421043473" watchObservedRunningTime="2026-01-20 02:50:42.552153807 +0000 UTC m=+221.807220255" Jan 20 02:50:42.605000 audit: BPF prog-id=210 op=LOAD Jan 20 02:50:42.654000 audit: BPF prog-id=211 op=LOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fe238 a2=98 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.654000 audit: BPF prog-id=211 op=UNLOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 
02:50:42.654000 audit: BPF prog-id=212 op=LOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fe488 a2=98 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.654000 audit: BPF prog-id=213 op=LOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000fe218 a2=98 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.654000 audit: BPF prog-id=213 op=UNLOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.654000 audit: BPF prog-id=212 op=UNLOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.654000 audit: BPF prog-id=214 op=LOAD Jan 20 02:50:42.654000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000fe6e8 a2=98 a3=0 items=0 ppid=5952 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339356638393362306236323635393463386664316265626565396536 Jan 20 02:50:42.672384 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:42.720000 audit: BPF prog-id=215 op=LOAD Jan 20 02:50:42.720000 audit[6050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe51379ab0 a2=98 a3=1fffffffffffffff items=0 ppid=5497 pid=6050 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.720000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.735000 audit: BPF prog-id=215 op=UNLOAD Jan 20 02:50:42.735000 audit[6050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe51379a80 a3=0 items=0 ppid=5497 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.735000 audit: BPF prog-id=216 op=LOAD Jan 20 02:50:42.735000 audit[6050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe51379990 a2=94 a3=3 items=0 ppid=5497 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.739000 audit: BPF prog-id=216 op=UNLOAD Jan 20 02:50:42.739000 audit[6050]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=7ffe51379990 a2=94 a3=3 items=0 ppid=5497 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.739000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.739000 audit: BPF prog-id=217 op=LOAD Jan 20 02:50:42.739000 audit[6050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe513799d0 a2=94 a3=7ffe51379bb0 items=0 ppid=5497 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.739000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.740000 audit: BPF prog-id=217 op=UNLOAD Jan 20 02:50:42.740000 audit[6050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe513799d0 a2=94 a3=7ffe51379bb0 items=0 ppid=5497 pid=6050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.740000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:50:42.751008 kubelet[2963]: E0120 
02:50:42.750864 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:42.772530 containerd[1640]: time="2026-01-20T02:50:42.769676171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zb7gt,Uid:2beb3373-3a79-403b-953d-80d6dc35b793,Namespace:calico-system,Attempt:0,} returns sandbox id \"802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961\"" Jan 20 02:50:42.772530 containerd[1640]: time="2026-01-20T02:50:42.771821586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:50:42.857000 audit: BPF prog-id=218 op=LOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff977f2670 a2=98 a3=3 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.857000 audit: BPF prog-id=218 op=UNLOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff977f2640 a3=0 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.857000 audit: BPF prog-id=219 op=LOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff977f2460 a2=94 a3=54428f items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.857000 audit: BPF prog-id=219 op=UNLOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff977f2460 a2=94 a3=54428f items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.857000 audit: BPF prog-id=220 op=LOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff977f2490 a2=94 a3=2 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.857000 audit: BPF prog-id=220 op=UNLOAD Jan 20 02:50:42.857000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff977f2490 a2=0 a3=2 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:42.857000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:42.957277 containerd[1640]: time="2026-01-20T02:50:42.957223552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:43.012916 containerd[1640]: time="2026-01-20T02:50:43.012851070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:43.038984 containerd[1640]: 
time="2026-01-20T02:50:43.014686558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:50:43.039431 kubelet[2963]: E0120 02:50:43.039391 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:50:43.047649 kubelet[2963]: E0120 02:50:43.040029 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:50:43.047649 kubelet[2963]: E0120 02:50:43.046449 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:43.059900 containerd[1640]: time="2026-01-20T02:50:43.057421438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:50:43.094009 kubelet[2963]: E0120 02:50:43.085207 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:43.096323 kubelet[2963]: E0120 02:50:43.096152 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:43.106594 kubelet[2963]: E0120 02:50:43.104241 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:43.138650 kubelet[2963]: E0120 02:50:43.138405 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:50:43.337720 containerd[1640]: time="2026-01-20T02:50:43.336883620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:43.363040 kubelet[2963]: I0120 02:50:43.361095 2963 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xll5r" podStartSLOduration=219.361065959 podStartE2EDuration="3m39.361065959s" 
podCreationTimestamp="2026-01-20 02:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:50:42.787275767 +0000 UTC m=+222.042342255" watchObservedRunningTime="2026-01-20 02:50:43.361065959 +0000 UTC m=+222.616132407" Jan 20 02:50:43.365659 containerd[1640]: time="2026-01-20T02:50:43.364404312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:50:43.367097 containerd[1640]: time="2026-01-20T02:50:43.365949916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:43.370320 kubelet[2963]: E0120 02:50:43.369613 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:50:43.370320 kubelet[2963]: E0120 02:50:43.369693 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:50:43.370320 kubelet[2963]: E0120 02:50:43.369823 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:43.370320 kubelet[2963]: E0120 02:50:43.369876 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:50:43.394983 containerd[1640]: time="2026-01-20T02:50:43.386238385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-554b6967f8-4mv9r,Uid:9eab50e8-9c7c-4942-9bf1-628e8f6481c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671\"" Jan 20 02:50:43.407035 containerd[1640]: time="2026-01-20T02:50:43.404876918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:50:43.630715 containerd[1640]: time="2026-01-20T02:50:43.630557728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:43.653927 containerd[1640]: time="2026-01-20T02:50:43.653284746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Jan 20 02:50:43.654093 containerd[1640]: time="2026-01-20T02:50:43.653460952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:43.655116 kubelet[2963]: E0120 02:50:43.654816 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:50:43.655116 kubelet[2963]: E0120 02:50:43.654886 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:50:43.655116 kubelet[2963]: E0120 02:50:43.654989 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:43.655116 kubelet[2963]: E0120 02:50:43.655035 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 
02:50:43.683464 kernel: kauditd_printk_skb: 208 callbacks suppressed Jan 20 02:50:43.683710 kernel: audit: type=1325 audit(1768877443.672:655): table=filter:121 family=2 entries=20 op=nft_register_rule pid=6069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:43.672000 audit[6069]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=6069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:43.672000 audit[6069]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd44f42740 a2=0 a3=7ffd44f4272c items=0 ppid=3069 pid=6069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:43.733564 kernel: audit: type=1300 audit(1768877443.672:655): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd44f42740 a2=0 a3=7ffd44f4272c items=0 ppid=3069 pid=6069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:43.733731 kernel: audit: type=1327 audit(1768877443.672:655): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:43.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:43.736059 kernel: audit: type=1325 audit(1768877443.733:656): table=nat:122 family=2 entries=14 op=nft_register_rule pid=6069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:43.733000 audit[6069]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=6069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:43.733000 audit[6069]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd44f42740 a2=0 a3=0 
items=0 ppid=3069 pid=6069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:43.792085 kernel: audit: type=1300 audit(1768877443.733:656): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd44f42740 a2=0 a3=0 items=0 ppid=3069 pid=6069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:43.792148 containerd[1640]: time="2026-01-20T02:50:43.769797136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:50:43.812246 kernel: audit: type=1327 audit(1768877443.733:656): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:43.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:44.074680 kubelet[2963]: E0120 02:50:44.074642 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:44.083162 kubelet[2963]: E0120 02:50:44.076678 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:44.083534 kubelet[2963]: E0120 02:50:44.083378 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:50:44.099726 kubelet[2963]: E0120 02:50:44.094556 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:50:44.142331 systemd-networkd[1544]: cali7d17cac97fc: Link UP Jan 20 02:50:44.144643 systemd-networkd[1544]: cali7d17cac97fc: Gained carrier Jan 20 02:50:44.225000 audit: BPF prog-id=221 op=LOAD Jan 20 02:50:44.344380 kernel: audit: type=1334 audit(1768877444.225:657): prog-id=221 op=LOAD Jan 20 02:50:44.346379 kernel: audit: type=1300 audit(1768877444.225:657): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff977f2350 a2=94 a3=1 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.225000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff977f2350 a2=94 
a3=1 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.426926 kernel: audit: type=1327 audit(1768877444.225:657): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.225000 audit: BPF prog-id=221 op=UNLOAD Jan 20 02:50:44.481824 kernel: audit: type=1334 audit(1768877444.225:658): prog-id=221 op=UNLOAD Jan 20 02:50:44.225000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff977f2350 a2=94 a3=1 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.225000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.245000 audit: BPF prog-id=222 op=LOAD Jan 20 02:50:44.245000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff977f2340 a2=94 a3=4 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.245000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.245000 audit: BPF prog-id=222 op=UNLOAD Jan 20 02:50:44.245000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff977f2340 a2=0 a3=4 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.245000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
02:50:44.246000 audit: BPF prog-id=223 op=LOAD Jan 20 02:50:44.246000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff977f21a0 a2=94 a3=5 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.246000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.246000 audit: BPF prog-id=223 op=UNLOAD Jan 20 02:50:44.246000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff977f21a0 a2=0 a3=5 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.246000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.246000 audit: BPF prog-id=224 op=LOAD Jan 20 02:50:44.246000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff977f23c0 a2=94 a3=6 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.246000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.246000 audit: BPF prog-id=224 op=UNLOAD Jan 20 02:50:44.246000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff977f23c0 a2=0 a3=6 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.246000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.246000 audit: BPF prog-id=225 op=LOAD Jan 20 02:50:44.246000 
audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff977f1b70 a2=94 a3=88 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.246000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.247000 audit: BPF prog-id=226 op=LOAD Jan 20 02:50:44.247000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff977f19f0 a2=94 a3=2 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.247000 audit: BPF prog-id=226 op=UNLOAD Jan 20 02:50:44.247000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff977f1a20 a2=0 a3=7fff977f1b20 items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.248000 audit: BPF prog-id=225 op=UNLOAD Jan 20 02:50:44.248000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=24a17d10 a2=0 a3=1f4924a34f5da84e items=0 ppid=5497 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.248000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:50:44.333000 audit: BPF prog-id=227 op=LOAD Jan 20 02:50:44.333000 audit[6089]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffd4da48350 a2=98 a3=1999999999999999 items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.333000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.333000 audit: BPF prog-id=227 op=UNLOAD Jan 20 02:50:44.333000 audit[6089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd4da48320 a3=0 items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.333000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.333000 audit: BPF prog-id=228 op=LOAD Jan 20 02:50:44.333000 audit[6089]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4da48230 a2=94 a3=ffff items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.333000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.333000 audit: BPF 
prog-id=228 op=UNLOAD Jan 20 02:50:44.333000 audit[6089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd4da48230 a2=94 a3=ffff items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.333000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.333000 audit: BPF prog-id=229 op=LOAD Jan 20 02:50:44.333000 audit[6089]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4da48270 a2=94 a3=7ffd4da48450 items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.333000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.483000 audit: BPF prog-id=229 op=UNLOAD Jan 20 02:50:44.483000 audit[6089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd4da48270 a2=94 a3=7ffd4da48450 items=0 ppid=5497 pid=6089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:44.483000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:42.537 [INFO][5985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0 calico-apiserver-99b79f8fd- calico-apiserver 67615726-cef8-44da-a26c-7795f613fcbb 1139 0 2026-01-20 02:47:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99b79f8fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-99b79f8fd-9fwc6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d17cac97fc [] [] }} ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:42.541 [INFO][5985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.044 [INFO][6053] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" HandleID="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" 
Workload="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.063 [INFO][6053] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" HandleID="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Workload="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044e550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-99b79f8fd-9fwc6", "timestamp":"2026-01-20 02:50:43.023146589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.063 [INFO][6053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.064 [INFO][6053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.064 [INFO][6053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.160 [INFO][6053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.437 [INFO][6053] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.549 [INFO][6053] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.571 [INFO][6053] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.739 [INFO][6053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.744 [INFO][6053] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.835 [INFO][6053] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:43.962 [INFO][6053] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:44.108 [INFO][6053] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:44.109 [INFO][6053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" host="localhost" Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:44.110 [INFO][6053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:44.542620 containerd[1640]: 2026-01-20 02:50:44.110 [INFO][6053] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" HandleID="k8s-pod-network.2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Workload="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.133 [INFO][5985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0", GenerateName:"calico-apiserver-99b79f8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"67615726-cef8-44da-a26c-7795f613fcbb", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99b79f8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-99b79f8fd-9fwc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d17cac97fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.133 [INFO][5985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.133 [INFO][5985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d17cac97fc ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.147 [INFO][5985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.150 [INFO][5985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0", GenerateName:"calico-apiserver-99b79f8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"67615726-cef8-44da-a26c-7795f613fcbb", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99b79f8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff", Pod:"calico-apiserver-99b79f8fd-9fwc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d17cac97fc", MAC:"f6:d3:b4:be:57:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:44.545919 containerd[1640]: 2026-01-20 02:50:44.529 [INFO][5985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-9fwc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--9fwc6-eth0" Jan 20 02:50:45.290421 kubelet[2963]: E0120 02:50:45.283750 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:50:45.587000 audit[6114]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:45.587000 audit[6114]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff69cfb790 a2=0 a3=7fff69cfb77c items=0 ppid=3069 pid=6114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:45.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:45.747260 kubelet[2963]: E0120 02:50:45.735094 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:45.747393 containerd[1640]: time="2026-01-20T02:50:45.743880197Z" level=info msg="connecting to shim 2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff" 
address="unix:///run/containerd/s/400fde7afd2306379c1b0b19b74d1df99ddda0b18f8e205cc10b7a77bd63c5d6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:45.912000 audit[6114]: NETFILTER_CFG table=nat:124 family=2 entries=47 op=nft_register_chain pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:45.912000 audit[6114]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff69cfb790 a2=0 a3=7fff69cfb77c items=0 ppid=3069 pid=6114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:45.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:45.983225 systemd-networkd[1544]: cali7d17cac97fc: Gained IPv6LL Jan 20 02:50:46.292418 systemd[1]: Started cri-containerd-2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff.scope - libcontainer container 2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff. 
Jan 20 02:50:46.359000 audit: BPF prog-id=230 op=LOAD Jan 20 02:50:46.360000 audit: BPF prog-id=231 op=LOAD Jan 20 02:50:46.360000 audit[6135]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.361000 audit: BPF prog-id=231 op=UNLOAD Jan 20 02:50:46.361000 audit[6135]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.362000 audit: BPF prog-id=232 op=LOAD Jan 20 02:50:46.362000 audit[6135]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.362000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.363000 audit: BPF prog-id=233 op=LOAD Jan 20 02:50:46.363000 audit[6135]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.363000 audit: BPF prog-id=233 op=UNLOAD Jan 20 02:50:46.363000 audit[6135]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.363000 audit: BPF prog-id=232 op=UNLOAD Jan 20 02:50:46.363000 audit[6135]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:50:46.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.364000 audit: BPF prog-id=234 op=LOAD Jan 20 02:50:46.364000 audit[6135]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6123 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:46.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313864396230633234373133313466343330333461336631323834 Jan 20 02:50:46.407072 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:46.792408 containerd[1640]: time="2026-01-20T02:50:46.792010222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-9fwc6,Uid:67615726-cef8-44da-a26c-7795f613fcbb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff\"" Jan 20 02:50:46.793168 systemd-networkd[1544]: vxlan.calico: Link UP Jan 20 02:50:46.793175 systemd-networkd[1544]: vxlan.calico: Gained carrier Jan 20 02:50:46.806365 containerd[1640]: time="2026-01-20T02:50:46.805972488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:50:46.985592 containerd[1640]: time="2026-01-20T02:50:46.982284321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:47.003389 containerd[1640]: time="2026-01-20T02:50:47.002260852Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:47.003389 containerd[1640]: time="2026-01-20T02:50:47.002403295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:50:47.011585 systemd-networkd[1544]: cali012089bd691: Link UP Jan 20 02:50:47.013606 kubelet[2963]: E0120 02:50:47.012727 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:50:47.013606 kubelet[2963]: E0120 02:50:47.012834 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:50:47.013606 kubelet[2963]: E0120 02:50:47.012963 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:47.013606 kubelet[2963]: E0120 02:50:47.013008 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:50:47.021957 systemd-networkd[1544]: cali012089bd691: Gained carrier Jan 20 02:50:47.098000 audit: BPF prog-id=235 op=LOAD Jan 20 02:50:47.098000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc38305a20 a2=98 a3=0 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.098000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.098000 audit: BPF prog-id=235 op=UNLOAD Jan 20 02:50:47.098000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc383059f0 a3=0 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.098000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.102000 audit: BPF prog-id=236 op=LOAD Jan 20 02:50:47.102000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc38305830 a2=94 a3=54428f items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:50:47.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.102000 audit: BPF prog-id=236 op=UNLOAD Jan 20 02:50:47.102000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc38305830 a2=94 a3=54428f items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.102000 audit: BPF prog-id=237 op=LOAD Jan 20 02:50:47.102000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc38305860 a2=94 a3=2 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.102000 audit: BPF prog-id=237 op=UNLOAD Jan 20 02:50:47.102000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc38305860 a2=0 a3=2 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.102000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.102000 audit: BPF prog-id=238 op=LOAD Jan 20 02:50:47.102000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc38305610 a2=94 a3=4 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.103000 audit: BPF prog-id=238 op=UNLOAD Jan 20 02:50:47.103000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc38305610 a2=94 a3=4 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.103000 audit: BPF prog-id=239 op=LOAD Jan 20 02:50:47.103000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc38305710 a2=94 a3=7ffc38305890 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.103000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.103000 audit: BPF prog-id=239 op=UNLOAD Jan 20 02:50:47.103000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc38305710 a2=0 a3=7ffc38305890 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.104000 audit: BPF prog-id=240 op=LOAD Jan 20 02:50:47.104000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc38304e40 a2=94 a3=2 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.104000 audit: BPF prog-id=240 op=UNLOAD Jan 20 02:50:47.104000 audit[6175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc38304e40 a2=0 a3=2 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.104000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.104000 audit: BPF prog-id=241 op=LOAD Jan 20 02:50:47.104000 audit[6175]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc38304f40 a2=94 a3=30 items=0 ppid=5497 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:44.755 [INFO][6072] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0 calico-apiserver-99b79f8fd- calico-apiserver 78de0405-4f44-497e-8007-519223ee3a61 1137 0 2026-01-20 02:47:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99b79f8fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-99b79f8fd-h8mhs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali012089bd691 [] [] }} ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:44.755 [INFO][6072] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.354 [INFO][6108] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" HandleID="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Workload="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.354 [INFO][6108] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" HandleID="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Workload="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d0010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-99b79f8fd-h8mhs", "timestamp":"2026-01-20 02:50:46.354284018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.354 [INFO][6108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.354 [INFO][6108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.354 [INFO][6108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.443 [INFO][6108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.575 [INFO][6108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.702 [INFO][6108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.759 [INFO][6108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.811 [INFO][6108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.815 [INFO][6108] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.853 [INFO][6108] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.892 [INFO][6108] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.958 [INFO][6108] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.958 [INFO][6108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" host="localhost" Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.958 [INFO][6108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:50:47.153605 containerd[1640]: 2026-01-20 02:50:46.958 [INFO][6108] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" HandleID="k8s-pod-network.68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Workload="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:46.970 [INFO][6072] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0", GenerateName:"calico-apiserver-99b79f8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78de0405-4f44-497e-8007-519223ee3a61", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99b79f8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-99b79f8fd-h8mhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012089bd691", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:46.971 [INFO][6072] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:46.971 [INFO][6072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali012089bd691 ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:46.975 [INFO][6072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:46.993 [INFO][6072] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0", GenerateName:"calico-apiserver-99b79f8fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78de0405-4f44-497e-8007-519223ee3a61", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99b79f8fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b", Pod:"calico-apiserver-99b79f8fd-h8mhs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012089bd691", MAC:"3a:89:fa:9f:44:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:50:47.157241 containerd[1640]: 2026-01-20 02:50:47.089 [INFO][6072] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" Namespace="calico-apiserver" Pod="calico-apiserver-99b79f8fd-h8mhs" WorkloadEndpoint="localhost-k8s-calico--apiserver--99b79f8fd--h8mhs-eth0" Jan 20 02:50:47.166000 audit: BPF prog-id=242 op=LOAD Jan 20 02:50:47.166000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce1b29910 a2=98 a3=0 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.166000 audit: BPF prog-id=242 op=UNLOAD Jan 20 02:50:47.166000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce1b298e0 a3=0 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.166000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.167000 audit: BPF prog-id=243 op=LOAD Jan 20 02:50:47.167000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce1b29700 a2=94 a3=54428f items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.167000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.167000 audit: BPF prog-id=243 op=UNLOAD Jan 20 02:50:47.167000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce1b29700 a2=94 a3=54428f items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.167000 audit: BPF prog-id=244 op=LOAD Jan 20 02:50:47.167000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce1b29730 a2=94 a3=2 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.167000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.167000 audit: BPF prog-id=244 op=UNLOAD Jan 20 02:50:47.167000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce1b29730 a2=0 a3=2 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.167000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.286454 containerd[1640]: time="2026-01-20T02:50:47.285382236Z" level=info msg="connecting to shim 68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b" address="unix:///run/containerd/s/61810ddeadb81641958b7ef52207f10da92cb86646ce7b543400b8e0292e5cae" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:50:47.431527 systemd[1]: Started cri-containerd-68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b.scope - libcontainer container 68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b. Jan 20 02:50:47.463967 kubelet[2963]: E0120 02:50:47.463246 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:50:47.561000 audit: BPF prog-id=245 op=LOAD Jan 20 02:50:47.564000 audit: BPF prog-id=246 op=LOAD Jan 20 02:50:47.564000 audit[6211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.564000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.564000 audit: BPF prog-id=246 op=UNLOAD Jan 20 02:50:47.564000 audit[6211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.566000 audit: BPF prog-id=247 op=LOAD Jan 20 02:50:47.566000 audit[6211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.566000 audit: BPF prog-id=248 op=LOAD Jan 20 02:50:47.566000 audit[6211]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:50:47.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.572000 audit: BPF prog-id=248 op=UNLOAD Jan 20 02:50:47.572000 audit[6211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.572000 audit: BPF prog-id=247 op=UNLOAD Jan 20 02:50:47.572000 audit[6211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.572000 audit: BPF prog-id=249 op=LOAD Jan 20 02:50:47.572000 audit[6211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6200 pid=6211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638626635646164626438376137313932393466363162643066653362 Jan 20 02:50:47.598439 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:50:47.727167 kubelet[2963]: E0120 02:50:47.716123 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:47.718000 audit[6232]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=6232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:47.718000 audit[6232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd912be9b0 a2=0 a3=7ffd912be99c items=0 ppid=3069 pid=6232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:47.758000 audit[6232]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=6232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:47.758000 audit[6232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd912be9b0 a2=0 a3=7ffd912be99c items=0 ppid=3069 pid=6232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.758000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:47.961000 audit: BPF prog-id=250 op=LOAD Jan 20 02:50:47.961000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce1b295f0 a2=94 a3=1 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:47.963000 audit: BPF prog-id=250 op=UNLOAD Jan 20 02:50:47.963000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce1b295f0 a2=94 a3=1 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:47.963000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.034287 systemd-networkd[1544]: cali012089bd691: Gained IPv6LL Jan 20 02:50:48.060000 audit: BPF prog-id=251 op=LOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce1b295e0 a2=94 a3=4 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=251 op=UNLOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce1b295e0 a2=0 a3=4 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=252 op=LOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce1b29440 a2=94 a3=5 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=252 op=UNLOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce1b29440 a2=0 a3=5 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=253 op=LOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce1b29660 a2=94 a3=6 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=253 op=UNLOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce1b29660 a2=0 a3=6 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=254 op=LOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce1b28e10 a2=94 a3=88 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=255 op=LOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffce1b28c90 a2=94 a3=2 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.060000 audit: BPF prog-id=255 op=UNLOAD Jan 20 02:50:48.060000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffce1b28cc0 a2=0 a3=7ffce1b28dc0 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.060000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.064000 audit: BPF prog-id=254 op=UNLOAD Jan 20 02:50:48.064000 audit[6184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=291eed10 a2=0 a3=985c78a98b685d23 items=0 ppid=5497 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.064000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:50:48.170000 audit: BPF prog-id=241 op=UNLOAD Jan 20 02:50:48.170000 audit[5497]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ae0f80 a2=0 a3=0 items=0 ppid=5486 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.170000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 02:50:48.306440 containerd[1640]: time="2026-01-20T02:50:48.301727414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99b79f8fd-h8mhs,Uid:78de0405-4f44-497e-8007-519223ee3a61,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b\"" Jan 20 02:50:48.372898 containerd[1640]: time="2026-01-20T02:50:48.370301750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:50:48.495843 kubelet[2963]: E0120 02:50:48.495785 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:50:48.520858 containerd[1640]: time="2026-01-20T02:50:48.518682070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:48.548440 containerd[1640]: time="2026-01-20T02:50:48.548147796Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:50:48.548440 containerd[1640]: time="2026-01-20T02:50:48.548266606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:48.549990 kubelet[2963]: E0120 02:50:48.549187 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:50:48.549990 kubelet[2963]: E0120 02:50:48.549262 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:50:48.549990 kubelet[2963]: E0120 02:50:48.549355 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:48.549990 kubelet[2963]: E0120 02:50:48.549397 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:50:48.734586 systemd-networkd[1544]: vxlan.calico: Gained IPv6LL Jan 20 02:50:48.797000 audit[6266]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=6266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:48.814302 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 02:50:48.815051 kernel: audit: type=1325 audit(1768877448.797:727): table=nat:127 family=2 entries=15 op=nft_register_chain pid=6266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:48.797000 audit[6266]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe473316d0 a2=0 a3=7ffe473316bc items=0 ppid=5497 pid=6266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.866075 kernel: audit: type=1300 audit(1768877448.797:727): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe473316d0 a2=0 a3=7ffe473316bc items=0 ppid=5497 pid=6266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.866201 kernel: audit: type=1327 audit(1768877448.797:727): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:48.797000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:48.901401 kernel: audit: type=1325 audit(1768877448.824:728): table=mangle:128 family=2 entries=16 op=nft_register_chain pid=6267 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:48.824000 audit[6267]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=6267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:48.933151 kernel: audit: type=1300 audit(1768877448.824:728): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff960183a0 a2=0 a3=7fff9601838c items=0 ppid=5497 pid=6267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.824000 audit[6267]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff960183a0 a2=0 a3=7fff9601838c items=0 ppid=5497 pid=6267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:48.824000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:48.975894 kernel: audit: type=1327 audit(1768877448.824:728): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:49.036000 audit[6265]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:49.067931 kernel: audit: type=1325 audit(1768877449.036:729): table=raw:129 family=2 entries=21 op=nft_register_chain pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:49.068071 kernel: audit: type=1300 audit(1768877449.036:729): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff50d4e720 a2=0 a3=7fff50d4e70c items=0 ppid=5497 pid=6265 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.036000 audit[6265]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff50d4e720 a2=0 a3=7fff50d4e70c items=0 ppid=5497 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.036000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:49.149595 kernel: audit: type=1327 audit(1768877449.036:729): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:49.057000 audit[6269]: NETFILTER_CFG table=filter:130 family=2 entries=298 op=nft_register_chain pid=6269 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:49.057000 audit[6269]: SYSCALL arch=c000003e syscall=46 success=yes exit=174480 a0=3 a1=7ffe242608a0 a2=0 a3=7ffe2426088c items=0 ppid=5497 pid=6269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.057000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:49.323275 kernel: audit: type=1325 audit(1768877449.057:730): table=filter:130 family=2 entries=298 op=nft_register_chain pid=6269 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:49.509135 kubelet[2963]: E0120 02:50:49.507283 2963 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:50:49.573000 audit[6280]: NETFILTER_CFG table=filter:131 family=2 entries=41 op=nft_register_chain pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:50:49.573000 audit[6280]: SYSCALL arch=c000003e syscall=46 success=yes exit=23096 a0=3 a1=7ffe86766020 a2=0 a3=7ffe8676600c items=0 ppid=5497 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:50:49.755000 audit[6282]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=6282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:49.755000 audit[6282]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd12ca3c30 a2=0 a3=7ffd12ca3c1c items=0 ppid=3069 pid=6282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:49.769000 audit[6282]: NETFILTER_CFG table=nat:133 family=2 entries=20 
op=nft_register_rule pid=6282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:50:49.769000 audit[6282]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd12ca3c30 a2=0 a3=7ffd12ca3c1c items=0 ppid=3069 pid=6282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:50:49.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:50:53.741962 containerd[1640]: time="2026-01-20T02:50:53.739545741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:50:53.893641 containerd[1640]: time="2026-01-20T02:50:53.892329909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:53.910681 containerd[1640]: time="2026-01-20T02:50:53.909920869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:50:53.910681 containerd[1640]: time="2026-01-20T02:50:53.910075295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:53.913462 kubelet[2963]: E0120 02:50:53.912697 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:50:53.913462 kubelet[2963]: E0120 02:50:53.912800 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:50:53.913462 kubelet[2963]: E0120 02:50:53.912932 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:53.913462 kubelet[2963]: E0120 02:50:53.912981 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:50:56.725322 containerd[1640]: time="2026-01-20T02:50:56.725272207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:50:56.899343 containerd[1640]: time="2026-01-20T02:50:56.898276183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:56.910002 containerd[1640]: time="2026-01-20T02:50:56.907622124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:50:56.910002 containerd[1640]: time="2026-01-20T02:50:56.909351600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 
02:50:56.911551 kubelet[2963]: E0120 02:50:56.911317 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:50:56.911551 kubelet[2963]: E0120 02:50:56.911381 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:50:56.917526 kubelet[2963]: E0120 02:50:56.916653 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:56.917526 kubelet[2963]: E0120 02:50:56.917176 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:50:57.716886 kubelet[2963]: E0120 02:50:57.715285 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 02:50:57.726085 containerd[1640]: time="2026-01-20T02:50:57.724928339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:50:57.862941 containerd[1640]: time="2026-01-20T02:50:57.861782336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:57.887868 containerd[1640]: time="2026-01-20T02:50:57.885112377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:50:57.887868 containerd[1640]: time="2026-01-20T02:50:57.885267085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:57.888135 kubelet[2963]: E0120 02:50:57.886154 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:50:57.888135 kubelet[2963]: E0120 02:50:57.886291 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:50:57.888135 kubelet[2963]: E0120 02:50:57.886580 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:57.912430 
containerd[1640]: time="2026-01-20T02:50:57.912049963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:50:58.039032 containerd[1640]: time="2026-01-20T02:50:58.032651698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:58.069247 containerd[1640]: time="2026-01-20T02:50:58.056826048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:58.097440 containerd[1640]: time="2026-01-20T02:50:58.097218568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:50:58.107977 kubelet[2963]: E0120 02:50:58.102794 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:50:58.107977 kubelet[2963]: E0120 02:50:58.102859 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:50:58.107977 kubelet[2963]: E0120 02:50:58.103593 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:58.107977 kubelet[2963]: E0120 02:50:58.103650 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:50:58.732227 containerd[1640]: time="2026-01-20T02:50:58.727247578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:50:58.896342 containerd[1640]: time="2026-01-20T02:50:58.892975020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:58.924014 containerd[1640]: time="2026-01-20T02:50:58.920641317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:50:58.924014 containerd[1640]: time="2026-01-20T02:50:58.920826682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:58.928888 kubelet[2963]: E0120 02:50:58.926078 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:50:58.928888 kubelet[2963]: E0120 02:50:58.926137 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:50:58.928888 kubelet[2963]: E0120 02:50:58.926222 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:58.929118 containerd[1640]: time="2026-01-20T02:50:58.927633092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:50:59.052073 containerd[1640]: time="2026-01-20T02:50:59.051005264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:50:59.061346 containerd[1640]: time="2026-01-20T02:50:59.060575023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:50:59.061346 containerd[1640]: time="2026-01-20T02:50:59.060697470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:50:59.061647 kubelet[2963]: E0120 02:50:59.060919 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:50:59.061647 kubelet[2963]: E0120 02:50:59.060973 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:50:59.061647 kubelet[2963]: E0120 02:50:59.061063 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:50:59.061647 kubelet[2963]: E0120 02:50:59.061119 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:51:01.336175 kubelet[2963]: E0120 02:51:01.327873 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:02.744702 containerd[1640]: time="2026-01-20T02:51:02.744653120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:51:02.868359 containerd[1640]: time="2026-01-20T02:51:02.857745214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:02.875851 containerd[1640]: time="2026-01-20T02:51:02.875701366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:51:02.876726 containerd[1640]: time="2026-01-20T02:51:02.876269162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:02.880277 kubelet[2963]: E0120 02:51:02.880033 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:02.881274 kubelet[2963]: E0120 02:51:02.880431 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:02.882133 kubelet[2963]: E0120 02:51:02.881431 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:02.882133 kubelet[2963]: E0120 02:51:02.881586 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:51:03.742547 containerd[1640]: time="2026-01-20T02:51:03.716435213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:51:03.888160 containerd[1640]: time="2026-01-20T02:51:03.887357379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:03.905899 containerd[1640]: time="2026-01-20T02:51:03.905100436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:51:03.905899 containerd[1640]: time="2026-01-20T02:51:03.905221101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:03.906127 kubelet[2963]: E0120 02:51:03.905414 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:03.906127 kubelet[2963]: E0120 02:51:03.905466 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:03.906127 kubelet[2963]: E0120 02:51:03.905643 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:03.906127 kubelet[2963]: E0120 02:51:03.905690 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:51:05.736003 kubelet[2963]: E0120 02:51:05.729571 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:51:10.772088 kubelet[2963]: E0120 02:51:10.771941 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:51:11.758984 kubelet[2963]: E0120 02:51:11.757160 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:51:12.738195 kubelet[2963]: E0120 02:51:12.738130 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:51:17.737215 kubelet[2963]: E0120 02:51:17.735037 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:51:18.750316 containerd[1640]: time="2026-01-20T02:51:18.750260586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:51:18.762773 kubelet[2963]: E0120 02:51:18.762669 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:51:18.978291 containerd[1640]: time="2026-01-20T02:51:18.977568849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:19.010615 containerd[1640]: time="2026-01-20T02:51:19.006248617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:19.010615 containerd[1640]: 
time="2026-01-20T02:51:19.006437109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:51:19.019966 kubelet[2963]: E0120 02:51:19.007385 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:51:19.019966 kubelet[2963]: E0120 02:51:19.007431 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:51:19.019966 kubelet[2963]: E0120 02:51:19.013110 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:19.019966 kubelet[2963]: E0120 02:51:19.013334 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:51:23.846428 containerd[1640]: 
time="2026-01-20T02:51:23.844376463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:51:24.024708 containerd[1640]: time="2026-01-20T02:51:24.024360791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:24.177325 containerd[1640]: time="2026-01-20T02:51:24.176294318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:51:24.184564 containerd[1640]: time="2026-01-20T02:51:24.177667170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:24.219945 kubelet[2963]: E0120 02:51:24.197259 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:51:24.219945 kubelet[2963]: E0120 02:51:24.200599 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:51:24.219945 kubelet[2963]: E0120 02:51:24.204126 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:24.231358 containerd[1640]: time="2026-01-20T02:51:24.221066357Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:51:24.410574 containerd[1640]: time="2026-01-20T02:51:24.406702670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:24.463187 containerd[1640]: time="2026-01-20T02:51:24.456223724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:51:24.463187 containerd[1640]: time="2026-01-20T02:51:24.456351452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:24.482884 kubelet[2963]: E0120 02:51:24.482573 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:51:24.482884 kubelet[2963]: E0120 02:51:24.482645 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:51:24.491520 kubelet[2963]: E0120 02:51:24.483167 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:24.494928 kubelet[2963]: E0120 02:51:24.494541 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:51:24.744424 containerd[1640]: time="2026-01-20T02:51:24.742465704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:51:24.941655 containerd[1640]: time="2026-01-20T02:51:24.940659274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:24.961908 containerd[1640]: time="2026-01-20T02:51:24.948753561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:51:24.961908 containerd[1640]: time="2026-01-20T02:51:24.956235149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:24.965817 kubelet[2963]: E0120 02:51:24.963179 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:51:24.965817 kubelet[2963]: E0120 02:51:24.963240 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:51:24.965817 kubelet[2963]: E0120 02:51:24.963526 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:24.969258 containerd[1640]: time="2026-01-20T02:51:24.968869225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:51:25.125437 containerd[1640]: time="2026-01-20T02:51:25.125379636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:25.134779 containerd[1640]: time="2026-01-20T02:51:25.134230980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:51:25.134779 containerd[1640]: time="2026-01-20T02:51:25.134340273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:25.134779 containerd[1640]: time="2026-01-20T02:51:25.140680860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:51:25.160872 kubelet[2963]: E0120 
02:51:25.137392 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:51:25.160872 kubelet[2963]: E0120 02:51:25.137440 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:51:25.160872 kubelet[2963]: E0120 02:51:25.139152 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:25.160872 kubelet[2963]: E0120 02:51:25.139207 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:51:25.295869 containerd[1640]: time="2026-01-20T02:51:25.289466921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:25.303945 containerd[1640]: time="2026-01-20T02:51:25.303409649Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:51:25.303945 containerd[1640]: time="2026-01-20T02:51:25.303585346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:25.306001 kubelet[2963]: E0120 02:51:25.305312 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:51:25.306001 kubelet[2963]: E0120 02:51:25.305375 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:51:25.306001 kubelet[2963]: E0120 02:51:25.305554 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:25.306001 kubelet[2963]: E0120 02:51:25.305610 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:51:27.784359 containerd[1640]: time="2026-01-20T02:51:27.783014131Z" level=info msg="container event discarded" container=1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4 type=CONTAINER_CREATED_EVENT Jan 20 02:51:27.784359 containerd[1640]: time="2026-01-20T02:51:27.784308357Z" level=info msg="container event discarded" container=1bbf335a2d8fc1b04c520da65e5e107490fe4f07918d46bdbf8ab1e559d6f0e4 type=CONTAINER_STARTED_EVENT Jan 20 02:51:27.895614 containerd[1640]: time="2026-01-20T02:51:27.895548994Z" level=info msg="container event discarded" container=c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a type=CONTAINER_CREATED_EVENT Jan 20 02:51:27.895896 containerd[1640]: time="2026-01-20T02:51:27.895864130Z" level=info msg="container event discarded" container=c321302b230591fe92f1c95b2454afa8782776108ee5c461fe97a49eff68de3a type=CONTAINER_STARTED_EVENT Jan 20 02:51:28.614144 containerd[1640]: time="2026-01-20T02:51:28.613956146Z" level=info msg="container event discarded" container=09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e type=CONTAINER_CREATED_EVENT Jan 20 02:51:28.614981 containerd[1640]: time="2026-01-20T02:51:28.614906762Z" level=info msg="container event discarded" container=09236e09df2dbc6a03088a784da43990013854b6d9b698358b6653ef25e2fd9e type=CONTAINER_STARTED_EVENT Jan 20 02:51:28.688722 containerd[1640]: time="2026-01-20T02:51:28.688598898Z" level=info msg="container event discarded" 
container=111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1 type=CONTAINER_CREATED_EVENT Jan 20 02:51:28.765022 containerd[1640]: time="2026-01-20T02:51:28.733758439Z" level=info msg="container event discarded" container=c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e type=CONTAINER_CREATED_EVENT Jan 20 02:51:29.277338 containerd[1640]: time="2026-01-20T02:51:29.256452252Z" level=info msg="container event discarded" container=0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd type=CONTAINER_CREATED_EVENT Jan 20 02:51:29.808055 containerd[1640]: time="2026-01-20T02:51:29.805543846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:51:30.041328 containerd[1640]: time="2026-01-20T02:51:30.041195304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:30.093068 containerd[1640]: time="2026-01-20T02:51:30.086960275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:30.216966 containerd[1640]: time="2026-01-20T02:51:30.214578488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:51:30.245138 kubelet[2963]: E0120 02:51:30.244578 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:30.245138 kubelet[2963]: E0120 02:51:30.244694 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:30.245138 kubelet[2963]: E0120 02:51:30.245003 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:30.334438 kubelet[2963]: E0120 02:51:30.334183 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:51:31.789261 containerd[1640]: time="2026-01-20T02:51:31.783593371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:51:32.082018 kubelet[2963]: E0120 02:51:32.081239 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:51:32.176018 containerd[1640]: time="2026-01-20T02:51:32.170382885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:51:32.176018 containerd[1640]: 
time="2026-01-20T02:51:32.172130684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:51:32.176018 containerd[1640]: time="2026-01-20T02:51:32.172221171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:51:32.181706 kubelet[2963]: E0120 02:51:32.179897 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:32.181706 kubelet[2963]: E0120 02:51:32.179947 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:51:32.181706 kubelet[2963]: E0120 02:51:32.180044 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:51:32.181706 kubelet[2963]: E0120 02:51:32.180084 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:51:33.413093 containerd[1640]: time="2026-01-20T02:51:33.412779575Z" level=info msg="container event discarded" container=111a0596ea36b9178a4997101411d8c780578559392bd6a8632c80926df747b1 type=CONTAINER_STARTED_EVENT Jan 20 02:51:33.492177 containerd[1640]: time="2026-01-20T02:51:33.491406811Z" level=info msg="container event discarded" container=0aa4bbfb1ec76e795c0b38aed05edbae55576a2573da9ca4786d7f4e0e6de8cd type=CONTAINER_STARTED_EVENT Jan 20 02:51:33.843133 containerd[1640]: time="2026-01-20T02:51:33.833381467Z" level=info msg="container event discarded" container=c6193d26710dc038cf5a789ab227302418e59b3133dc3daf1ff495a39f4c497e type=CONTAINER_STARTED_EVENT Jan 20 02:51:37.739315 kubelet[2963]: E0120 02:51:37.732249 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:51:38.747468 kubelet[2963]: E0120 02:51:38.743602 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:51:39.802279 kubelet[2963]: E0120 02:51:39.791057 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:51:44.727228 kubelet[2963]: E0120 02:51:44.727057 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" 
podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:51:45.755024 kubelet[2963]: E0120 02:51:45.750789 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:51:46.730211 kubelet[2963]: E0120 02:51:46.726879 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:51:47.735559 kubelet[2963]: E0120 02:51:47.735130 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:49.732198 kubelet[2963]: E0120 02:51:49.727733 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:50.722453 kubelet[2963]: E0120 02:51:50.722322 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:51:51.731169 kubelet[2963]: E0120 02:51:51.728197 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:51.775560 kubelet[2963]: E0120 02:51:51.758694 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:51:53.734179 kubelet[2963]: E0120 02:51:53.733993 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:51:55.724535 kubelet[2963]: E0120 02:51:55.724166 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:55.755026 kubelet[2963]: E0120 02:51:55.754917 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:51:57.752616 kubelet[2963]: E0120 02:51:57.752347 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:57.769149 kubelet[2963]: E0120 02:51:57.765320 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:51:58.731064 kubelet[2963]: E0120 02:51:58.730662 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:52:01.744147 kubelet[2963]: E0120 02:52:01.737797 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:52:03.726381 kubelet[2963]: E0120 02:52:03.725005 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:52:04.737547 containerd[1640]: time="2026-01-20T02:52:04.737184698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:52:04.878885 containerd[1640]: time="2026-01-20T02:52:04.878760297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:04.894297 containerd[1640]: time="2026-01-20T02:52:04.894234363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:52:04.894664 containerd[1640]: time="2026-01-20T02:52:04.894550340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:04.895946 kubelet[2963]: E0120 02:52:04.894983 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:52:04.895946 kubelet[2963]: E0120 02:52:04.895036 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:52:04.895946 kubelet[2963]: E0120 02:52:04.895121 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:04.915001 containerd[1640]: time="2026-01-20T02:52:04.897149388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:52:05.121080 containerd[1640]: time="2026-01-20T02:52:05.120780226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:05.123812 containerd[1640]: time="2026-01-20T02:52:05.123757247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:52:05.124233 containerd[1640]: time="2026-01-20T02:52:05.124206751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:05.128206 kubelet[2963]: E0120 02:52:05.128041 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:52:05.128556 kubelet[2963]: E0120 02:52:05.128338 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:52:05.129060 kubelet[2963]: E0120 02:52:05.128795 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:05.129348 kubelet[2963]: E0120 02:52:05.129164 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:52:05.695661 containerd[1640]: time="2026-01-20T02:52:05.694338467Z" level=info msg="container event discarded" container=c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d type=CONTAINER_CREATED_EVENT Jan 20 02:52:05.695661 containerd[1640]: time="2026-01-20T02:52:05.694614831Z" level=info msg="container event discarded" container=c5c95fcc76eb8039275040d68fd1614a401594a24cfd6060a8cfb556eaa29a8d type=CONTAINER_STARTED_EVENT Jan 20 02:52:06.039967 containerd[1640]: time="2026-01-20T02:52:05.990430158Z" level=info msg="container event discarded" container=d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031 type=CONTAINER_CREATED_EVENT Jan 20 02:52:06.737627 containerd[1640]: time="2026-01-20T02:52:06.736792194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:52:06.740563 kubelet[2963]: E0120 02:52:06.734822 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:52:06.821962 containerd[1640]: time="2026-01-20T02:52:06.821621273Z" level=info msg="container event discarded" 
container=d29426439cde74651a2fba305498616ef2e55deff96ffce2d4987e099ad33031 type=CONTAINER_STARTED_EVENT Jan 20 02:52:06.863945 containerd[1640]: time="2026-01-20T02:52:06.863749671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:06.879555 containerd[1640]: time="2026-01-20T02:52:06.877252004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:52:06.879555 containerd[1640]: time="2026-01-20T02:52:06.877397904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:06.882772 kubelet[2963]: E0120 02:52:06.882587 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:52:06.895215 kubelet[2963]: E0120 02:52:06.882991 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:52:06.895215 kubelet[2963]: E0120 02:52:06.883400 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:06.903281 containerd[1640]: 
time="2026-01-20T02:52:06.902575364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:52:07.083036 containerd[1640]: time="2026-01-20T02:52:07.075601488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:07.136028 containerd[1640]: time="2026-01-20T02:52:07.129617030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:52:07.136028 containerd[1640]: time="2026-01-20T02:52:07.129729639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:07.136268 kubelet[2963]: E0120 02:52:07.130400 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:52:07.136268 kubelet[2963]: E0120 02:52:07.130594 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:52:07.136268 kubelet[2963]: E0120 02:52:07.131416 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:07.136268 kubelet[2963]: E0120 02:52:07.131586 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:52:08.060742 containerd[1640]: time="2026-01-20T02:52:08.060642267Z" level=info msg="container event discarded" container=7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8 type=CONTAINER_CREATED_EVENT Jan 20 02:52:08.060742 containerd[1640]: time="2026-01-20T02:52:08.060702038Z" level=info msg="container event discarded" container=7ad8c8d83c5a5c008027c4ec1ed414cde3bcf3d2da3e9b92cfa1f01674939ed8 type=CONTAINER_STARTED_EVENT Jan 20 02:52:12.728047 containerd[1640]: time="2026-01-20T02:52:12.726396197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:52:12.878640 containerd[1640]: time="2026-01-20T02:52:12.878169450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:12.893918 containerd[1640]: time="2026-01-20T02:52:12.893606965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:52:12.894167 containerd[1640]: 
time="2026-01-20T02:52:12.893734211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:12.895745 kubelet[2963]: E0120 02:52:12.895599 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:52:12.912005 kubelet[2963]: E0120 02:52:12.905236 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:52:12.912005 kubelet[2963]: E0120 02:52:12.905605 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:12.912005 kubelet[2963]: E0120 02:52:12.905654 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:52:12.912142 containerd[1640]: time="2026-01-20T02:52:12.908625440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 
20 02:52:13.028355 containerd[1640]: time="2026-01-20T02:52:13.027361117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:13.071783 containerd[1640]: time="2026-01-20T02:52:13.071632217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:52:13.085546 containerd[1640]: time="2026-01-20T02:52:13.072697876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:13.086295 kubelet[2963]: E0120 02:52:13.085999 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:52:13.086295 kubelet[2963]: E0120 02:52:13.086074 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:52:13.086295 kubelet[2963]: E0120 02:52:13.086173 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 
02:52:13.086295 kubelet[2963]: E0120 02:52:13.086219 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:52:13.763051 containerd[1640]: time="2026-01-20T02:52:13.762999715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:52:13.994582 containerd[1640]: time="2026-01-20T02:52:13.988422074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:14.003933 containerd[1640]: time="2026-01-20T02:52:14.002752072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:52:14.003933 containerd[1640]: time="2026-01-20T02:52:14.003010621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:14.004149 kubelet[2963]: E0120 02:52:14.003342 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:52:14.010039 kubelet[2963]: E0120 02:52:14.009784 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:52:14.011167 kubelet[2963]: E0120 02:52:14.011041 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:14.011691 kubelet[2963]: E0120 02:52:14.011468 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:52:15.744568 kubelet[2963]: E0120 02:52:15.743984 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 
02:52:17.763408 kubelet[2963]: E0120 02:52:17.762808 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:52:19.388670 containerd[1640]: time="2026-01-20T02:52:19.388395073Z" level=info msg="container event discarded" container=9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656 type=CONTAINER_CREATED_EVENT Jan 20 02:52:19.974812 containerd[1640]: time="2026-01-20T02:52:19.973962603Z" level=info msg="container event discarded" container=9452fb7f2424b07d171c13b7de7c83ab9e9a52d46b3e5e336dc8375081d9b656 type=CONTAINER_STARTED_EVENT Jan 20 02:52:20.730810 kubelet[2963]: E0120 02:52:20.727257 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:52:21.735071 kubelet[2963]: E0120 02:52:21.723612 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:52:21.738228 containerd[1640]: time="2026-01-20T02:52:21.730708687Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:52:21.844447 containerd[1640]: time="2026-01-20T02:52:21.844082828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:52:21.848801 containerd[1640]: time="2026-01-20T02:52:21.848131817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:52:21.848801 containerd[1640]: time="2026-01-20T02:52:21.848242663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:52:21.856926 kubelet[2963]: E0120 02:52:21.849631 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:52:21.856926 kubelet[2963]: E0120 02:52:21.849700 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:52:21.857180 kubelet[2963]: E0120 02:52:21.849801 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:52:21.857461 kubelet[2963]: E0120 02:52:21.857424 
2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:52:23.771635 kubelet[2963]: E0120 02:52:23.771566 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:52:25.736425 kubelet[2963]: E0120 02:52:25.726984 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:52:27.740870 kubelet[2963]: E0120 02:52:27.735546 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:52:28.731728 kubelet[2963]: E0120 02:52:28.731219 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:52:33.725727 kubelet[2963]: E0120 02:52:33.725575 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:52:34.722259 kubelet[2963]: E0120 02:52:34.721587 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:52:36.759968 kubelet[2963]: E0120 02:52:36.759411 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:52:39.775145 kubelet[2963]: E0120 02:52:39.775022 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:52:40.724264 kubelet[2963]: E0120 02:52:40.723172 2963 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:52:41.929083 kubelet[2963]: E0120 02:52:41.914463 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:52:47.845699 kubelet[2963]: E0120 02:52:47.823655 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" 
podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:52:47.845699 kubelet[2963]: E0120 02:52:47.845036 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:52:48.743608 kubelet[2963]: E0120 02:52:48.737316 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:52:58.365013 kubelet[2963]: E0120 02:52:58.359571 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:52:58.365013 kubelet[2963]: E0120 02:52:58.363663 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:52:58.478571 kubelet[2963]: E0120 02:52:58.464605 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:52:59.789822 kubelet[2963]: E0120 02:52:59.785560 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:52:59.789822 kubelet[2963]: E0120 02:52:59.789350 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:01.793229 kubelet[2963]: E0120 02:53:01.793123 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:53:01.816186 kubelet[2963]: E0120 02:53:01.816103 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:53:03.808778 kubelet[2963]: E0120 02:53:03.808446 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:04.721567 kubelet[2963]: E0120 02:53:04.720914 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:06.760014 kubelet[2963]: E0120 02:53:06.729837 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:09.726401 kubelet[2963]: E0120 02:53:09.724164 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:53:10.762830 kubelet[2963]: E0120 02:53:10.756427 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:53:12.747659 kubelet[2963]: E0120 02:53:12.741015 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:53:12.747659 kubelet[2963]: E0120 02:53:12.742098 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:53:12.764876 kubelet[2963]: E0120 02:53:12.759013 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:53:14.728007 kubelet[2963]: E0120 02:53:14.727944 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:53:15.140643 containerd[1640]: time="2026-01-20T02:53:15.131106813Z" level=info msg="container event discarded" container=fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73 type=CONTAINER_CREATED_EVENT Jan 20 02:53:15.140643 containerd[1640]: time="2026-01-20T02:53:15.131208832Z" level=info msg="container event discarded" container=fe22b359ac4f72c1a466351f8b644dda9b776debaec609a0d098fc799125cb73 type=CONTAINER_STARTED_EVENT Jan 20 02:53:15.140643 containerd[1640]: time="2026-01-20T02:53:15.131230473Z" level=info msg="container event discarded" container=9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37 type=CONTAINER_CREATED_EVENT Jan 20 02:53:15.140643 containerd[1640]: time="2026-01-20T02:53:15.131243397Z" level=info msg="container event discarded" container=9b11c0bb3f90598896829d6bcc1ca00a8def946677b2d9231b5bc29e2d25ee37 
type=CONTAINER_STARTED_EVENT Jan 20 02:53:18.715132 kubelet[2963]: E0120 02:53:18.715015 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:19.727010 kubelet[2963]: E0120 02:53:19.714599 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:22.725175 kubelet[2963]: E0120 02:53:22.721458 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:22.725175 kubelet[2963]: E0120 02:53:22.724140 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:53:22.738834 kubelet[2963]: E0120 02:53:22.735571 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:53:24.087120 containerd[1640]: time="2026-01-20T02:53:24.087021185Z" level=info msg="container event discarded" container=415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4 type=CONTAINER_CREATED_EVENT Jan 20 02:53:25.016404 containerd[1640]: time="2026-01-20T02:53:25.010129126Z" level=info msg="container event discarded" container=415be170bb09c264e0e58f280bc9402c849ac52d9afdc75400a8ea0f8f72cee4 type=CONTAINER_STARTED_EVENT Jan 20 02:53:25.738226 kubelet[2963]: E0120 02:53:25.738148 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:53:26.741208 kubelet[2963]: E0120 02:53:26.741100 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:53:27.666841 containerd[1640]: time="2026-01-20T02:53:27.663236765Z" 
level=info msg="container event discarded" container=f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb type=CONTAINER_CREATED_EVENT Jan 20 02:53:27.730799 containerd[1640]: time="2026-01-20T02:53:27.725094888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:53:27.917934 containerd[1640]: time="2026-01-20T02:53:27.914281292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:27.932320 containerd[1640]: time="2026-01-20T02:53:27.932226299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:53:27.932593 containerd[1640]: time="2026-01-20T02:53:27.932243859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:27.942986 kubelet[2963]: E0120 02:53:27.935439 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:53:27.942986 kubelet[2963]: E0120 02:53:27.935581 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:53:27.942986 kubelet[2963]: E0120 02:53:27.941158 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:27.986756 containerd[1640]: time="2026-01-20T02:53:27.986707315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:53:28.122876 containerd[1640]: time="2026-01-20T02:53:28.122773110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:28.171215 containerd[1640]: time="2026-01-20T02:53:28.171053721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:28.174012 containerd[1640]: time="2026-01-20T02:53:28.173739966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:53:28.180359 kubelet[2963]: E0120 02:53:28.174874 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:53:28.192815 kubelet[2963]: E0120 02:53:28.187586 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:53:28.192815 kubelet[2963]: E0120 02:53:28.190741 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:28.192815 kubelet[2963]: E0120 02:53:28.190982 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:53:29.114543 containerd[1640]: time="2026-01-20T02:53:29.114159041Z" level=info msg="container event discarded" container=f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb type=CONTAINER_STARTED_EVENT Jan 20 02:53:29.755158 kubelet[2963]: E0120 02:53:29.742365 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:53:30.254722 containerd[1640]: time="2026-01-20T02:53:30.245823110Z" level=info msg="container event 
discarded" container=f8c9062d394a6ff78a76ac4900d8f660fda6f23fdf2cb8a72abba830f50242cb type=CONTAINER_STOPPED_EVENT Jan 20 02:53:33.730015 kubelet[2963]: E0120 02:53:33.728354 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:53:36.770839 containerd[1640]: time="2026-01-20T02:53:36.769927568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:53:36.923962 containerd[1640]: time="2026-01-20T02:53:36.923122627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:36.938129 containerd[1640]: time="2026-01-20T02:53:36.937676838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:53:36.938129 containerd[1640]: time="2026-01-20T02:53:36.937835111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:36.942998 kubelet[2963]: E0120 02:53:36.942891 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:53:36.943945 kubelet[2963]: E0120 02:53:36.943207 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:53:36.944902 kubelet[2963]: E0120 02:53:36.944634 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:36.956733 containerd[1640]: time="2026-01-20T02:53:36.956238587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:53:37.083404 containerd[1640]: time="2026-01-20T02:53:37.083068775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:37.112631 containerd[1640]: time="2026-01-20T02:53:37.111578999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:53:37.112631 containerd[1640]: time="2026-01-20T02:53:37.111800519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:37.124107 kubelet[2963]: E0120 02:53:37.114652 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:53:37.124107 kubelet[2963]: E0120 02:53:37.114739 2963 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:53:37.124107 kubelet[2963]: E0120 02:53:37.114832 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:37.124107 kubelet[2963]: E0120 02:53:37.114886 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:53:39.739338 kubelet[2963]: E0120 02:53:39.738610 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:53:40.782625 kubelet[2963]: E0120 02:53:40.769843 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:53:40.783188 containerd[1640]: time="2026-01-20T02:53:40.776114854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:53:41.002853 containerd[1640]: time="2026-01-20T02:53:40.992616361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:41.023820 containerd[1640]: time="2026-01-20T02:53:41.017755967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:53:41.023820 containerd[1640]: time="2026-01-20T02:53:41.017916314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:41.024075 kubelet[2963]: E0120 02:53:41.018224 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:53:41.024075 kubelet[2963]: E0120 02:53:41.018287 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:53:41.024075 kubelet[2963]: E0120 02:53:41.018397 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:41.024075 kubelet[2963]: E0120 02:53:41.018441 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:53:41.792885 kubelet[2963]: E0120 02:53:41.770313 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" 
podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:53:44.757835 containerd[1640]: time="2026-01-20T02:53:44.740464295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:53:44.889593 containerd[1640]: time="2026-01-20T02:53:44.889275855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:44.904774 containerd[1640]: time="2026-01-20T02:53:44.902266313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:53:44.904774 containerd[1640]: time="2026-01-20T02:53:44.902328009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:44.904774 containerd[1640]: time="2026-01-20T02:53:44.903420566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:53:44.905043 kubelet[2963]: E0120 02:53:44.902866 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:53:44.905043 kubelet[2963]: E0120 02:53:44.902925 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:53:44.905043 kubelet[2963]: E0120 02:53:44.903137 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:44.905043 kubelet[2963]: E0120 02:53:44.903183 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:53:45.047911 containerd[1640]: time="2026-01-20T02:53:45.045134294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:45.068150 containerd[1640]: time="2026-01-20T02:53:45.066394890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:53:45.068150 containerd[1640]: time="2026-01-20T02:53:45.066632470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:45.068395 kubelet[2963]: E0120 02:53:45.067143 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:53:45.068395 kubelet[2963]: E0120 02:53:45.067198 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:53:45.068395 kubelet[2963]: E0120 02:53:45.067283 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:45.068395 kubelet[2963]: E0120 02:53:45.067324 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:53:49.737162 kubelet[2963]: E0120 02:53:49.736806 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:53:54.788304 containerd[1640]: time="2026-01-20T02:53:54.771062647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:53:54.947331 containerd[1640]: time="2026-01-20T02:53:54.946809476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:53:54.965074 containerd[1640]: time="2026-01-20T02:53:54.964996838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:53:54.965074 containerd[1640]: time="2026-01-20T02:53:54.965121599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:53:54.965947 kubelet[2963]: E0120 02:53:54.965339 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:53:54.965947 kubelet[2963]: E0120 02:53:54.965401 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:53:54.965947 kubelet[2963]: E0120 02:53:54.965587 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:53:54.965947 kubelet[2963]: E0120 02:53:54.965633 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:53:55.740408 kubelet[2963]: E0120 02:53:55.740229 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:53:55.780576 kubelet[2963]: E0120 02:53:55.780344 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:53:56.723946 kubelet[2963]: E0120 02:53:56.721949 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:53:57.739168 kubelet[2963]: E0120 02:53:57.737097 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:53:58.738065 containerd[1640]: time="2026-01-20T02:53:58.733791147Z" level=info msg="container event discarded" container=4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76 type=CONTAINER_CREATED_EVENT Jan 20 02:54:00.389336 containerd[1640]: time="2026-01-20T02:54:00.389224085Z" level=info msg="container event discarded" container=4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76 type=CONTAINER_STARTED_EVENT Jan 20 02:54:00.742828 kubelet[2963]: E0120 02:54:00.742189 2963 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:54:04.720718 kubelet[2963]: E0120 02:54:04.717184 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:08.492201 containerd[1640]: time="2026-01-20T02:54:08.492111678Z" level=info msg="container event discarded" container=4653c12fe760f0ddabdfbacae2ccaf1e28b72c8eee095561040a0e5008e3ff76 type=CONTAINER_STOPPED_EVENT Jan 20 02:54:08.744035 kubelet[2963]: E0120 02:54:08.739148 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:54:08.744035 kubelet[2963]: E0120 02:54:08.741189 2963 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:54:08.744035 kubelet[2963]: E0120 02:54:08.741319 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:54:10.740273 kubelet[2963]: E0120 02:54:10.734428 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:54:10.780643 kubelet[2963]: E0120 02:54:10.772844 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:54:13.757629 kubelet[2963]: E0120 02:54:13.751659 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:54:14.733894 kubelet[2963]: E0120 02:54:14.728256 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:16.711793 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:44936.service - OpenSSH per-connection server daemon (10.0.0.1:44936). 
Jan 20 02:54:16.738743 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 20 02:54:16.738919 kernel: audit: type=1130 audit(1768877656.715:734): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.129:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:54:16.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.129:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:54:17.742245 kubelet[2963]: E0120 02:54:17.741600 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:17.980000 audit[6596]: USER_ACCT pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.003979 sshd[6596]: Accepted publickey for core from 10.0.0.1 port 44936 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:54:18.026157 sshd-session[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:54:17.998000 audit[6596]: CRED_ACQ pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.097916 kernel: audit: type=1101 audit(1768877657.980:735): pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.098218 kernel: audit: type=1103 audit(1768877657.998:736): pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.098334 kernel: audit: type=1006 audit(1768877657.998:737): pid=6596 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 20 02:54:18.112721 systemd-logind[1612]: New session 8 of user core. Jan 20 02:54:17.998000 audit[6596]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7603d0e0 a2=3 a3=0 items=0 ppid=1 pid=6596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:18.233194 kernel: audit: type=1300 audit(1768877657.998:737): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7603d0e0 a2=3 a3=0 items=0 ppid=1 pid=6596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:18.235078 kernel: audit: type=1327 audit(1768877657.998:737): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:17.998000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:18.349068 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 02:54:18.444000 audit[6596]: USER_START pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.521892 kernel: audit: type=1105 audit(1768877658.444:738): pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.471000 audit[6599]: CRED_ACQ pid=6599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.590166 kernel: audit: type=1103 audit(1768877658.471:739): pid=6599 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:18.723227 kubelet[2963]: E0120 02:54:18.722230 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:19.664676 sshd[6599]: Connection closed by 10.0.0.1 port 44936 Jan 20 02:54:19.683154 sshd-session[6596]: pam_unix(sshd:session): session closed for user core Jan 20 02:54:19.698000 audit[6596]: USER_END pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:19.714353 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:44936.service: Deactivated successfully. Jan 20 02:54:19.718143 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:54:19.725654 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit. Jan 20 02:54:19.727614 systemd-logind[1612]: Removed session 8. Jan 20 02:54:19.746312 kernel: audit: type=1106 audit(1768877659.698:740): pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:19.700000 audit[6596]: CRED_DISP pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:19.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.129:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:54:19.830406 kernel: audit: type=1104 audit(1768877659.700:741): pid=6596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:20.728595 kubelet[2963]: E0120 02:54:20.717309 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:54:22.772385 kubelet[2963]: E0120 02:54:22.768432 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:54:23.725648 kubelet[2963]: E0120 02:54:23.723247 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:23.792863 kubelet[2963]: E0120 02:54:23.785356 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:54:24.734675 kubelet[2963]: E0120 02:54:24.731405 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:54:24.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.129:22-10.0.0.1:44074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:54:24.809748 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074). 
Jan 20 02:54:24.832554 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:54:24.832705 kernel: audit: type=1130 audit(1768877664.809:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.129:22-10.0.0.1:44074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:54:25.227000 audit[6620]: USER_ACCT pid=6620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:25.238203 sshd[6620]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:54:25.244823 sshd-session[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:54:25.294737 kernel: audit: type=1101 audit(1768877665.227:744): pid=6620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:25.294864 kernel: audit: type=1103 audit(1768877665.227:745): pid=6620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:25.227000 audit[6620]: CRED_ACQ pid=6620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:25.313696 systemd-logind[1612]: New session 9 of user core. 
Jan 20 02:54:25.383730 kernel: audit: type=1006 audit(1768877665.227:746): pid=6620 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 20 02:54:25.383839 kernel: audit: type=1300 audit(1768877665.227:746): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff17c37940 a2=3 a3=0 items=0 ppid=1 pid=6620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:25.227000 audit[6620]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff17c37940 a2=3 a3=0 items=0 ppid=1 pid=6620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:25.227000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:25.463605 kernel: audit: type=1327 audit(1768877665.227:746): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:25.468953 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 20 02:54:25.490000 audit[6620]: USER_START pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:25.536843 kernel: audit: type=1105 audit(1768877665.490:747): pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:25.507000 audit[6623]: CRED_ACQ pid=6623 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:25.598936 kernel: audit: type=1103 audit(1768877665.507:748): pid=6623 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:25.717638 kubelet[2963]: E0120 02:54:25.717074 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:25.720386 kubelet[2963]: E0120 02:54:25.720278 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:54:26.338329 sshd[6623]: Connection closed by 10.0.0.1 port 44074
Jan 20 02:54:26.362039 sshd-session[6620]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:26.352000 audit[6620]: USER_END pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:26.426235 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:44074.service: Deactivated successfully.
Jan 20 02:54:26.485004 kernel: audit: type=1106 audit(1768877666.352:749): pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:26.485210 kernel: audit: type=1104 audit(1768877666.352:750): pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:26.352000 audit[6620]: CRED_DISP pid=6620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:26.502023 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:54:26.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.129:22-10.0.0.1:44074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:26.537418 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:54:26.553272 systemd-logind[1612]: Removed session 9.
Jan 20 02:54:27.770976 kubelet[2963]: E0120 02:54:27.763992 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:54:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.129:22-10.0.0.1:44270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:31.559077 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:44270.service - OpenSSH per-connection server daemon (10.0.0.1:44270).
Jan 20 02:54:31.611729 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:54:31.611892 kernel: audit: type=1130 audit(1768877671.556:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.129:22-10.0.0.1:44270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:32.421431 sshd[6665]: Accepted publickey for core from 10.0.0.1 port 44270 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:54:32.512271 kernel: audit: type=1101 audit(1768877672.418:753): pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.418000 audit[6665]: USER_ACCT pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.462980 sshd-session[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:32.437000 audit[6665]: CRED_ACQ pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.568151 systemd-logind[1612]: New session 10 of user core.
Jan 20 02:54:32.629707 kernel: audit: type=1103 audit(1768877672.437:754): pid=6665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.629870 kernel: audit: type=1006 audit(1768877672.437:755): pid=6665 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Jan 20 02:54:32.657049 kernel: audit: type=1300 audit(1768877672.437:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeef6d4e60 a2=3 a3=0 items=0 ppid=1 pid=6665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:32.437000 audit[6665]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeef6d4e60 a2=3 a3=0 items=0 ppid=1 pid=6665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:32.699024 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 02:54:32.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:32.758000 audit[6665]: USER_START pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.878408 kernel: audit: type=1327 audit(1768877672.437:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:32.878618 kernel: audit: type=1105 audit(1768877672.758:756): pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.787000 audit[6668]: CRED_ACQ pid=6668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:32.966926 kernel: audit: type=1103 audit(1768877672.787:757): pid=6668 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:34.735928 sshd[6668]: Connection closed by 10.0.0.1 port 44270
Jan 20 02:54:34.743797 sshd-session[6665]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:34.766000 audit[6665]: USER_END pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:34.792370 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:44270.service: Deactivated successfully.
Jan 20 02:54:34.846073 kernel: audit: type=1106 audit(1768877674.766:758): pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:34.820439 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 02:54:34.766000 audit[6665]: CRED_DISP pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:34.859312 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit.
Jan 20 02:54:34.869149 systemd-logind[1612]: Removed session 10.
Jan 20 02:54:34.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.129:22-10.0.0.1:44270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:34.897265 kernel: audit: type=1104 audit(1768877674.766:759): pid=6665 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:35.724207 kubelet[2963]: E0120 02:54:35.723356 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:54:36.768799 kubelet[2963]: E0120 02:54:36.766867 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 02:54:36.768799 kubelet[2963]: E0120 02:54:36.767189 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:54:36.768799 kubelet[2963]: E0120 02:54:36.767724 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:54:36.770019 kubelet[2963]: E0120 02:54:36.767831 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:54:39.775586 kubelet[2963]: E0120 02:54:39.774915 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:54:39.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.129:22-10.0.0.1:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:39.977614 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:48910.service - OpenSSH per-connection server daemon (10.0.0.1:48910).
Jan 20 02:54:40.016324 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:54:40.016422 kernel: audit: type=1130 audit(1768877679.975:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.129:22-10.0.0.1:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:40.537000 audit[6684]: USER_ACCT pid=6684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.545779 sshd[6684]: Accepted publickey for core from 10.0.0.1 port 48910 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:54:40.549854 sshd-session[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:40.588018 kernel: audit: type=1101 audit(1768877680.537:762): pid=6684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.588145 kernel: audit: type=1103 audit(1768877680.539:763): pid=6684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.539000 audit[6684]: CRED_ACQ pid=6684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.612587 systemd-logind[1612]: New session 11 of user core.
Jan 20 02:54:40.641231 kernel: audit: type=1006 audit(1768877680.539:764): pid=6684 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Jan 20 02:54:40.539000 audit[6684]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe919c0e10 a2=3 a3=0 items=0 ppid=1 pid=6684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:40.701588 kernel: audit: type=1300 audit(1768877680.539:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe919c0e10 a2=3 a3=0 items=0 ppid=1 pid=6684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:40.701799 kernel: audit: type=1327 audit(1768877680.539:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:40.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:40.712115 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 02:54:40.722000 audit[6684]: USER_START pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.796667 kernel: audit: type=1105 audit(1768877680.722:765): pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.729000 audit[6687]: CRED_ACQ pid=6687 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:40.838673 kernel: audit: type=1103 audit(1768877680.729:766): pid=6687 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:41.263997 sshd[6687]: Connection closed by 10.0.0.1 port 48910
Jan 20 02:54:41.273584 sshd-session[6684]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:41.268000 audit[6684]: USER_END pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:41.292213 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:48910.service: Deactivated successfully.
Jan 20 02:54:41.305755 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 02:54:41.340872 systemd-logind[1612]: Session 11 logged out. Waiting for processes to exit.
Jan 20 02:54:41.374871 kernel: audit: type=1106 audit(1768877681.268:767): pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:41.381110 systemd-logind[1612]: Removed session 11.
Jan 20 02:54:41.268000 audit[6684]: CRED_DISP pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:41.475315 kernel: audit: type=1104 audit(1768877681.268:768): pid=6684 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:41.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.129:22-10.0.0.1:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:43.801539 kubelet[2963]: E0120 02:54:43.797753 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:54:46.393304 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:49712.service - OpenSSH per-connection server daemon (10.0.0.1:49712).
Jan 20 02:54:46.497616 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:54:46.498563 kernel: audit: type=1130 audit(1768877686.405:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.129:22-10.0.0.1:49712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:46.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.129:22-10.0.0.1:49712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:47.077000 audit[6703]: USER_ACCT pid=6703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.099851 sshd[6703]: Accepted publickey for core from 10.0.0.1 port 49712 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:54:47.172979 kernel: audit: type=1101 audit(1768877687.077:771): pid=6703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.182000 audit[6703]: CRED_ACQ pid=6703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.187795 sshd-session[6703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:54:47.241015 kernel: audit: type=1103 audit(1768877687.182:772): pid=6703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.284180 systemd-logind[1612]: New session 12 of user core.
Jan 20 02:54:47.287512 kernel: audit: type=1006 audit(1768877687.186:773): pid=6703 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Jan 20 02:54:47.287633 kernel: audit: type=1300 audit(1768877687.186:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebe2139d0 a2=3 a3=0 items=0 ppid=1 pid=6703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:47.186000 audit[6703]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebe2139d0 a2=3 a3=0 items=0 ppid=1 pid=6703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:54:47.364631 kernel: audit: type=1327 audit(1768877687.186:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:47.186000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:54:47.399875 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 02:54:47.450000 audit[6703]: USER_START pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.556566 kernel: audit: type=1105 audit(1768877687.450:774): pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.482000 audit[6706]: CRED_ACQ pid=6706 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.662560 kernel: audit: type=1103 audit(1768877687.482:775): pid=6706 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:47.732596 kubelet[2963]: E0120 02:54:47.732341 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:54:47.759612 kubelet[2963]: E0120 02:54:47.758943 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:54:47.762627 kubelet[2963]: E0120 02:54:47.760918 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:54:48.392883 sshd[6706]: Connection closed by 10.0.0.1 port 49712
Jan 20 02:54:48.396720 sshd-session[6703]: pam_unix(sshd:session): session closed for user core
Jan 20 02:54:48.421000 audit[6703]: USER_END pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:48.447253 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit.
Jan 20 02:54:48.449434 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:49712.service: Deactivated successfully.
Jan 20 02:54:48.508313 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 02:54:48.572924 kernel: audit: type=1106 audit(1768877688.421:776): pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:48.573085 kernel: audit: type=1104 audit(1768877688.426:777): pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:48.426000 audit[6703]: CRED_DISP pid=6703 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:54:48.662166 systemd-logind[1612]: Removed session 12.
Jan 20 02:54:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.129:22-10.0.0.1:49712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:48.770590 kubelet[2963]: E0120 02:54:48.763064 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:54:50.721699 kubelet[2963]: E0120 02:54:50.721299 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:54:51.779421 kubelet[2963]: E0120 02:54:51.779077 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 02:54:53.678610 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:54:53.678884 kernel: audit: type=1130 audit(1768877693.631:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.129:22-10.0.0.1:49730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:53.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.129:22-10.0.0.1:49730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:54:53.641177 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:49730.service - OpenSSH per-connection server daemon (10.0.0.1:49730).
Jan 20 02:54:55.281288 sshd[6720]: Accepted publickey for core from 10.0.0.1 port 49730 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:54:55.280000 audit[6720]: USER_ACCT pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.297872 sshd-session[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:54:55.363747 kernel: audit: type=1101 audit(1768877695.280:780): pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.363870 kernel: audit: type=1103 audit(1768877695.295:781): pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.295000 audit[6720]: CRED_ACQ pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.416913 systemd-logind[1612]: New session 13 of user core. 
Jan 20 02:54:55.437559 kernel: audit: type=1006 audit(1768877695.295:782): pid=6720 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 02:54:55.437720 kernel: audit: type=1300 audit(1768877695.295:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe807a350 a2=3 a3=0 items=0 ppid=1 pid=6720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:55.295000 audit[6720]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe807a350 a2=3 a3=0 items=0 ppid=1 pid=6720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:54:55.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:55.558599 kernel: audit: type=1327 audit(1768877695.295:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:54:55.565887 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 02:54:55.608000 audit[6720]: USER_START pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.725371 kernel: audit: type=1105 audit(1768877695.608:783): pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.726317 kernel: audit: type=1103 audit(1768877695.627:784): pid=6723 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:55.627000 audit[6723]: CRED_ACQ pid=6723 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:58.015814 sshd[6723]: Connection closed by 10.0.0.1 port 49730 Jan 20 02:54:58.023848 sshd-session[6720]: pam_unix(sshd:session): session closed for user core Jan 20 02:54:58.056000 audit[6720]: USER_END pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:58.083834 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. 
Jan 20 02:54:58.131262 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:49730.service: Deactivated successfully. Jan 20 02:54:58.151776 kernel: audit: type=1106 audit(1768877698.056:785): pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:58.155179 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:54:58.170883 systemd-logind[1612]: Removed session 13. Jan 20 02:54:58.056000 audit[6720]: CRED_DISP pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:58.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.129:22-10.0.0.1:49730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:54:58.222780 kernel: audit: type=1104 audit(1768877698.056:786): pid=6720 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:54:59.726280 kubelet[2963]: E0120 02:54:59.726188 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:54:59.753870 kubelet[2963]: E0120 02:54:59.753627 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:55:00.721015 kubelet[2963]: E0120 02:55:00.718713 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:55:01.731996 kubelet[2963]: E0120 02:55:01.727359 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:55:01.776053 kubelet[2963]: E0120 02:55:01.775951 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:55:03.059556 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778). Jan 20 02:55:03.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.129:22-10.0.0.1:54778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:03.076836 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:03.076939 kernel: audit: type=1130 audit(1768877703.057:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.129:22-10.0.0.1:54778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:03.655000 audit[6766]: USER_ACCT pid=6766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.701385 sshd[6766]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:03.702021 kernel: audit: type=1101 audit(1768877703.655:789): pid=6766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.701000 audit[6766]: CRED_ACQ pid=6766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.702857 sshd-session[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:03.735540 kubelet[2963]: E0120 02:55:03.734985 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:55:03.782949 kernel: audit: type=1103 audit(1768877703.701:790): pid=6766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.783082 kernel: audit: type=1006 audit(1768877703.701:791): pid=6766 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 02:55:03.783128 kernel: audit: type=1300 audit(1768877703.701:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7855c6b0 a2=3 a3=0 items=0 ppid=1 pid=6766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:03.701000 audit[6766]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7855c6b0 a2=3 a3=0 items=0 ppid=1 pid=6766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:03.782038 systemd-logind[1612]: New session 14 of user core. Jan 20 02:55:03.701000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:03.821591 kernel: audit: type=1327 audit(1768877703.701:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:03.828277 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 02:55:03.855000 audit[6766]: USER_START pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.911556 kernel: audit: type=1105 audit(1768877703.855:792): pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.864000 audit[6769]: CRED_ACQ pid=6769 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:03.961619 kernel: audit: type=1103 audit(1768877703.864:793): pid=6769 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:04.626456 sshd[6769]: Connection closed by 10.0.0.1 port 54778 Jan 20 02:55:04.623264 sshd-session[6766]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:04.666596 kernel: audit: type=1106 audit(1768877704.633:794): pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:04.633000 audit[6766]: USER_END pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:04.678777 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:54778.service: Deactivated successfully. Jan 20 02:55:04.633000 audit[6766]: CRED_DISP pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:04.707049 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:55:04.713976 kernel: audit: type=1104 audit(1768877704.633:795): pid=6766 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:04.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.129:22-10.0.0.1:54778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:04.720656 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:55:04.732229 systemd-logind[1612]: Removed session 14. 
Jan 20 02:55:05.753243 kubelet[2963]: E0120 02:55:05.746053 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:55:09.739639 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:09.739798 kernel: audit: type=1130 audit(1768877709.726:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.129:22-10.0.0.1:49714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:09.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.129:22-10.0.0.1:49714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:09.731592 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:49714.service - OpenSSH per-connection server daemon (10.0.0.1:49714). 
Jan 20 02:55:10.560789 sshd[6785]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:10.552000 audit[6785]: USER_ACCT pid=6785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.570533 sshd-session[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:10.654412 kernel: audit: type=1101 audit(1768877710.552:798): pid=6785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.654680 kernel: audit: type=1103 audit(1768877710.564:799): pid=6785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.564000 audit[6785]: CRED_ACQ pid=6785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.693893 systemd-logind[1612]: New session 15 of user core. 
Jan 20 02:55:10.753945 kernel: audit: type=1006 audit(1768877710.564:800): pid=6785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 02:55:10.564000 audit[6785]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4212d050 a2=3 a3=0 items=0 ppid=1 pid=6785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:10.791877 kernel: audit: type=1300 audit(1768877710.564:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4212d050 a2=3 a3=0 items=0 ppid=1 pid=6785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:10.807003 kernel: audit: type=1327 audit(1768877710.564:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:10.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:10.796584 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 02:55:10.840000 audit[6785]: USER_START pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.848000 audit[6788]: CRED_ACQ pid=6788 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.932684 kernel: audit: type=1105 audit(1768877710.840:801): pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:10.932818 kernel: audit: type=1103 audit(1768877710.848:802): pid=6788 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:11.729938 sshd[6788]: Connection closed by 10.0.0.1 port 49714 Jan 20 02:55:11.728927 sshd-session[6785]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:11.803699 kernel: audit: type=1106 audit(1768877711.758:803): pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:11.758000 audit[6785]: USER_END pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:11.814774 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:49714.service: Deactivated successfully. Jan 20 02:55:11.892729 kernel: audit: type=1104 audit(1768877711.758:804): pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:11.758000 audit[6785]: CRED_DISP pid=6785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:11.894005 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:55:11.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.129:22-10.0.0.1:49714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:11.909339 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:55:11.932973 systemd-logind[1612]: Removed session 15. 
Jan 20 02:55:12.725909 kubelet[2963]: E0120 02:55:12.725577 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:55:14.727399 kubelet[2963]: E0120 02:55:14.725047 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:55:14.775655 kubelet[2963]: E0120 02:55:14.763084 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:55:14.802229 kubelet[2963]: E0120 02:55:14.802054 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:55:15.731386 kubelet[2963]: E0120 02:55:15.731078 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:55:16.721392 kubelet[2963]: E0120 02:55:16.720576 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:16.877979 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:33636.service - OpenSSH per-connection server daemon (10.0.0.1:33636). Jan 20 02:55:16.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.129:22-10.0.0.1:33636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:16.902978 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:16.912631 kernel: audit: type=1130 audit(1768877716.876:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.129:22-10.0.0.1:33636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:17.604000 audit[6803]: USER_ACCT pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:17.623274 sshd[6803]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:17.717432 kernel: audit: type=1101 audit(1768877717.604:807): pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:17.718363 kernel: audit: type=1103 audit(1768877717.703:808): pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:17.703000 audit[6803]: CRED_ACQ pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:17.763105 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:17.801577 kernel: audit: type=1006 audit(1768877717.703:809): pid=6803 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 02:55:17.703000 audit[6803]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1a978ff0 a2=3 a3=0 items=0 ppid=1 pid=6803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:17.832045 systemd-logind[1612]: New session 16 of user core. Jan 20 02:55:17.937749 kernel: audit: type=1300 audit(1768877717.703:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1a978ff0 a2=3 a3=0 items=0 ppid=1 pid=6803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:17.703000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:17.987602 kernel: audit: type=1327 audit(1768877717.703:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:17.996033 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 02:55:18.018000 audit[6803]: USER_START pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:18.142770 kernel: audit: type=1105 audit(1768877718.018:810): pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:18.042000 audit[6812]: CRED_ACQ pid=6812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:18.219757 kernel: audit: type=1103 audit(1768877718.042:811): pid=6812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:18.757654 kubelet[2963]: E0120 02:55:18.744041 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:55:19.508061 sshd[6812]: Connection closed by 10.0.0.1 port 33636 Jan 20 02:55:19.510347 sshd-session[6803]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:19.516000 audit[6803]: USER_END pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:19.542116 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:33636.service: Deactivated successfully. Jan 20 02:55:19.571149 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:55:19.584952 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:55:19.594067 systemd-logind[1612]: Removed session 16. 
Jan 20 02:55:19.516000 audit[6803]: CRED_DISP pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:19.636386 kernel: audit: type=1106 audit(1768877719.516:812): pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:19.636536 kernel: audit: type=1104 audit(1768877719.516:813): pid=6803 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:19.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.129:22-10.0.0.1:33636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:23.729419 kubelet[2963]: E0120 02:55:23.727872 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:55:24.406414 update_engine[1617]: I20260120 02:55:24.402889 1617 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:55:24.406414 update_engine[1617]: I20260120 02:55:24.403026 1617 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.423689 1617 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.433686 1617 omaha_request_params.cc:62] Current group set to beta Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.433961 1617 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.433983 1617 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.434010 1617 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.434089 1617 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.434187 1617 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.434253 1617 omaha_request_action.cc:272] Request: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: Jan 20 02:55:24.440781 update_engine[1617]: I20260120 02:55:24.434265 1617 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:55:24.515241 update_engine[1617]: I20260120 02:55:24.514015 1617 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:55:24.515241 update_engine[1617]: I20260120 02:55:24.515017 1617 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:55:24.540882 update_engine[1617]: E20260120 02:55:24.540800 1617 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:55:24.545454 update_engine[1617]: I20260120 02:55:24.540956 1617 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:55:24.668685 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:55:24.831565 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:24.831723 kernel: audit: type=1130 audit(1768877724.720:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.129:22-10.0.0.1:37462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.129:22-10.0.0.1:37462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:24.723691 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:37462.service - OpenSSH per-connection server daemon (10.0.0.1:37462). 
Jan 20 02:55:24.910594 kubelet[2963]: E0120 02:55:24.904950 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:25.239000 audit[6837]: USER_ACCT pid=6837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.270261 sshd[6837]: Accepted publickey for core from 10.0.0.1 port 37462 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:25.276911 kernel: audit: type=1101 audit(1768877725.239:816): pid=6837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.279314 sshd-session[6837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:25.278000 audit[6837]: CRED_ACQ pid=6837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.339458 kernel: audit: type=1103 audit(1768877725.278:817): pid=6837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.489318 kernel: audit: type=1006 audit(1768877725.278:818): pid=6837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 02:55:25.278000 audit[6837]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffed5826d0 a2=3 a3=0 items=0 ppid=1 pid=6837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:25.572000 kernel: audit: type=1300 audit(1768877725.278:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffed5826d0 a2=3 a3=0 items=0 ppid=1 pid=6837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:25.278000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:25.693392 kernel: audit: type=1327 audit(1768877725.278:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:25.699462 systemd-logind[1612]: New session 17 of user core. Jan 20 02:55:25.732643 kubelet[2963]: E0120 02:55:25.732348 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:25.781441 containerd[1640]: time="2026-01-20T02:55:25.774374035Z" level=info msg="container event discarded" container=5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef type=CONTAINER_CREATED_EVENT Jan 20 02:55:25.842341 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 02:55:25.928577 kernel: audit: type=1105 audit(1768877725.860:819): pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.860000 audit[6837]: USER_START pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:25.943000 audit[6840]: CRED_ACQ pid=6840 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:26.008364 kernel: audit: type=1103 audit(1768877725.943:820): pid=6840 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:26.727322 kubelet[2963]: E0120 02:55:26.723730 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:55:26.947388 containerd[1640]: time="2026-01-20T02:55:26.941602366Z" level=info msg="container event discarded" 
container=5289b65548ac5de6a984e5ba32993b7200563eb849a240f79cdf9cbdfe5063ef type=CONTAINER_STARTED_EVENT Jan 20 02:55:27.174422 sshd[6840]: Connection closed by 10.0.0.1 port 37462 Jan 20 02:55:27.186907 sshd-session[6837]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:27.192000 audit[6837]: USER_END pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:27.201000 audit[6837]: CRED_DISP pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:27.285068 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:37462.service: Deactivated successfully. Jan 20 02:55:27.314578 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 02:55:27.344358 kernel: audit: type=1106 audit(1768877727.192:821): pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:27.345230 kernel: audit: type=1104 audit(1768877727.201:822): pid=6837 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:27.344577 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. 
Jan 20 02:55:27.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.129:22-10.0.0.1:37462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:27.361634 systemd-logind[1612]: Removed session 17. Jan 20 02:55:28.769421 kubelet[2963]: E0120 02:55:28.768623 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:55:28.805551 kubelet[2963]: E0120 02:55:28.805053 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:55:30.756364 kubelet[2963]: E0120 02:55:30.756243 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:55:32.317212 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624). Jan 20 02:55:32.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.129:22-10.0.0.1:37624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:32.372377 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:32.375330 kernel: audit: type=1130 audit(1768877732.328:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.129:22-10.0.0.1:37624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:32.723264 kubelet[2963]: E0120 02:55:32.714939 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:32.787742 kubelet[2963]: E0120 02:55:32.787639 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:55:33.131000 audit[6880]: USER_ACCT pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.146439 sshd[6880]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:33.193685 sshd-session[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:33.167000 audit[6880]: CRED_ACQ pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.244726 systemd-logind[1612]: New session 18 of user core. Jan 20 02:55:33.282672 kernel: audit: type=1101 audit(1768877733.131:825): pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.282833 kernel: audit: type=1103 audit(1768877733.167:826): pid=6880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.282883 kernel: audit: type=1006 audit(1768877733.167:827): pid=6880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 02:55:33.167000 audit[6880]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8211b7c0 a2=3 a3=0 items=0 ppid=1 pid=6880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:33.383727 kernel: audit: type=1300 audit(1768877733.167:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8211b7c0 a2=3 a3=0 items=0 ppid=1 pid=6880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:33.383880 kernel: audit: type=1327 audit(1768877733.167:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:33.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:33.397247 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 02:55:33.444000 audit[6880]: USER_START pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.477000 audit[6890]: CRED_ACQ pid=6890 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.568293 kernel: audit: type=1105 audit(1768877733.444:828): pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:33.568541 kernel: audit: type=1103 audit(1768877733.477:829): pid=6890 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:34.982092 update_engine[1617]: I20260120 02:55:34.979700 1617 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:55:34.982092 update_engine[1617]: I20260120 02:55:34.979900 1617 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:55:34.990454 update_engine[1617]: I20260120 02:55:34.988986 1617 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:55:35.029267 update_engine[1617]: E20260120 02:55:35.025918 1617 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:55:35.029267 update_engine[1617]: I20260120 02:55:35.026051 1617 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:55:35.571544 sshd[6890]: Connection closed by 10.0.0.1 port 37624 Jan 20 02:55:35.616096 sshd-session[6880]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:35.643000 audit[6880]: USER_END pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:35.715469 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:37624.service: Deactivated successfully. Jan 20 02:55:35.746099 kernel: audit: type=1106 audit(1768877735.643:830): pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:35.643000 audit[6880]: CRED_DISP pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:35.817425 kernel: audit: type=1104 audit(1768877735.643:831): pid=6880 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:35.771076 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 20 02:55:35.792737 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Jan 20 02:55:35.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.129:22-10.0.0.1:37624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:35.872453 systemd-logind[1612]: Removed session 18. Jan 20 02:55:37.731744 kubelet[2963]: E0120 02:55:37.724184 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:55:38.727549 kubelet[2963]: E0120 02:55:38.722047 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:55:39.580963 containerd[1640]: time="2026-01-20T02:55:39.579585346Z" level=info msg="container event discarded" container=94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5 type=CONTAINER_CREATED_EVENT Jan 20 02:55:39.580963 containerd[1640]: time="2026-01-20T02:55:39.579700510Z" level=info msg="container event discarded" 
container=94298c98ceb717fb8dc281d15a62ec0679639822efa0efbf09b3db4fd54049d5 type=CONTAINER_STARTED_EVENT Jan 20 02:55:39.741999 containerd[1640]: time="2026-01-20T02:55:39.730958784Z" level=info msg="container event discarded" container=7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3 type=CONTAINER_CREATED_EVENT Jan 20 02:55:39.741999 containerd[1640]: time="2026-01-20T02:55:39.731002685Z" level=info msg="container event discarded" container=7b37c1571772930cd97ab19f3ac0dc463a541b9009e430014d6cb6d88c28c1e3 type=CONTAINER_STARTED_EVENT Jan 20 02:55:39.763070 kubelet[2963]: E0120 02:55:39.744447 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:55:39.827871 containerd[1640]: time="2026-01-20T02:55:39.824325423Z" level=info msg="container event discarded" container=b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663 type=CONTAINER_CREATED_EVENT Jan 20 02:55:39.872444 containerd[1640]: time="2026-01-20T02:55:39.870667594Z" level=info msg="container event discarded" container=4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f type=CONTAINER_CREATED_EVENT Jan 20 02:55:40.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.129:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:40.661064 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). 
Jan 20 02:55:40.744398 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:40.744643 kernel: audit: type=1130 audit(1768877740.659:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.129:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:40.754938 containerd[1640]: time="2026-01-20T02:55:40.752815187Z" level=info msg="container event discarded" container=0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de type=CONTAINER_CREATED_EVENT Jan 20 02:55:40.754938 containerd[1640]: time="2026-01-20T02:55:40.752868595Z" level=info msg="container event discarded" container=0331213baf44381cbe64cfdd73f03274771822323f6a0252f1fc3946557c63de type=CONTAINER_STARTED_EVENT Jan 20 02:55:40.754938 containerd[1640]: time="2026-01-20T02:55:40.752881960Z" level=info msg="container event discarded" container=b9f3d9d99bdc570b2d630ce0072d79b533c2f553e8776e0cf0bf5c15db0d6663 type=CONTAINER_STARTED_EVENT Jan 20 02:55:40.754938 containerd[1640]: time="2026-01-20T02:55:40.752893081Z" level=info msg="container event discarded" container=4c6a6d9be2bdf703d27ca53edab10a825c2986b9aab44cb29800841e8c5dba5f type=CONTAINER_STARTED_EVENT Jan 20 02:55:41.032893 containerd[1640]: time="2026-01-20T02:55:40.992420854Z" level=info msg="container event discarded" container=631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96 type=CONTAINER_CREATED_EVENT Jan 20 02:55:41.032893 containerd[1640]: time="2026-01-20T02:55:40.992563138Z" level=info msg="container event discarded" container=631ea0caf9b95ba2054068a47cf706665ac80082b2feb6ef8a03b36b345a4f96 type=CONTAINER_STARTED_EVENT Jan 20 02:55:41.202000 audit[6909]: USER_ACCT pid=6909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 02:55:41.215251 sshd[6909]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:41.231540 sshd-session[6909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:41.290096 kernel: audit: type=1101 audit(1768877741.202:834): pid=6909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.220000 audit[6909]: CRED_ACQ pid=6909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.407619 kernel: audit: type=1103 audit(1768877741.220:835): pid=6909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.407772 kernel: audit: type=1006 audit(1768877741.220:836): pid=6909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 02:55:41.220000 audit[6909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf0063b70 a2=3 a3=0 items=0 ppid=1 pid=6909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:41.415372 systemd-logind[1612]: New session 19 of user core. 
Jan 20 02:55:41.220000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:41.525785 kernel: audit: type=1300 audit(1768877741.220:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf0063b70 a2=3 a3=0 items=0 ppid=1 pid=6909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:41.525943 kernel: audit: type=1327 audit(1768877741.220:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:41.551679 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 02:55:41.619000 audit[6909]: USER_START pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.646000 audit[6912]: CRED_ACQ pid=6912 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.751301 kernel: audit: type=1105 audit(1768877741.619:837): pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:41.751451 kernel: audit: type=1103 audit(1768877741.646:838): pid=6912 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
02:55:41.776600 kubelet[2963]: E0120 02:55:41.771991 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:55:41.779714 kubelet[2963]: E0120 02:55:41.778156 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:42.516271 sshd[6912]: Connection closed by 10.0.0.1 port 47614 Jan 20 02:55:42.528434 sshd-session[6909]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:42.544000 audit[6909]: USER_END pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:42.587412 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:47614.service: Deactivated successfully. Jan 20 02:55:42.593994 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:55:42.623292 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 20 02:55:42.629402 kernel: audit: type=1106 audit(1768877742.544:839): pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:42.544000 audit[6909]: CRED_DISP pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:42.637984 systemd-logind[1612]: Removed session 19. Jan 20 02:55:42.704898 kernel: audit: type=1104 audit(1768877742.544:840): pid=6909 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:42.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.129:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:42.781713 containerd[1640]: time="2026-01-20T02:55:42.781382102Z" level=info msg="container event discarded" container=802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961 type=CONTAINER_CREATED_EVENT Jan 20 02:55:42.781713 containerd[1640]: time="2026-01-20T02:55:42.781540456Z" level=info msg="container event discarded" container=802ee81c76844d246546e532fe81fe62117202abb2619d283f45fd4f5cf60961 type=CONTAINER_STARTED_EVENT Jan 20 02:55:43.397215 containerd[1640]: time="2026-01-20T02:55:43.397077981Z" level=info msg="container event discarded" container=c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671 type=CONTAINER_CREATED_EVENT Jan 20 02:55:43.397215 containerd[1640]: time="2026-01-20T02:55:43.397177225Z" level=info msg="container event discarded" container=c95f893b0b626594c8fd1bebee9e6dd5568a21608e838d9ef6c0b27756415671 type=CONTAINER_STARTED_EVENT Jan 20 02:55:43.731667 kubelet[2963]: E0120 02:55:43.729252 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:55:44.981759 update_engine[1617]: I20260120 02:55:44.977783 1617 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:55:44.981759 update_engine[1617]: I20260120 02:55:44.977900 1617 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:55:44.981759 update_engine[1617]: I20260120 02:55:44.981691 1617 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:55:44.999330 update_engine[1617]: E20260120 02:55:44.999194 1617 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:55:44.999733 update_engine[1617]: I20260120 02:55:44.999701 1617 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:55:46.803559 containerd[1640]: time="2026-01-20T02:55:46.803237743Z" level=info msg="container event discarded" container=2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff type=CONTAINER_CREATED_EVENT Jan 20 02:55:46.803559 containerd[1640]: time="2026-01-20T02:55:46.803313825Z" level=info msg="container event discarded" container=2d18d9b0c2471314f43034a3f1284a859655b8b6b1888ba7177e52159aac19ff type=CONTAINER_STARTED_EVENT Jan 20 02:55:47.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.129:22-10.0.0.1:36164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:47.693906 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:36164.service - OpenSSH per-connection server daemon (10.0.0.1:36164). Jan 20 02:55:47.751007 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:47.759256 kernel: audit: type=1130 audit(1768877747.706:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.129:22-10.0.0.1:36164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:55:47.816923 kubelet[2963]: E0120 02:55:47.816347 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:55:47.966917 kubelet[2963]: E0120 02:55:47.959847 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:55:48.314909 containerd[1640]: time="2026-01-20T02:55:48.313641316Z" level=info msg="container event discarded" container=68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b type=CONTAINER_CREATED_EVENT Jan 20 02:55:48.314909 containerd[1640]: time="2026-01-20T02:55:48.313745561Z" level=info msg="container event discarded" container=68bf5dadbd87a719294f61bd0fe3be88ef9de4cf8b71d32ae6fd2afbe8f33a7b 
type=CONTAINER_STARTED_EVENT Jan 20 02:55:48.480000 audit[6926]: USER_ACCT pid=6926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.499670 sshd-session[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:48.505690 sshd[6926]: Accepted publickey for core from 10.0.0.1 port 36164 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:48.582817 kernel: audit: type=1101 audit(1768877748.480:843): pid=6926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.582981 kernel: audit: type=1103 audit(1768877748.497:844): pid=6926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.497000 audit[6926]: CRED_ACQ pid=6926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.599348 systemd-logind[1612]: New session 20 of user core. 
Jan 20 02:55:48.688882 kernel: audit: type=1006 audit(1768877748.497:845): pid=6926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 02:55:48.497000 audit[6926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc292b8920 a2=3 a3=0 items=0 ppid=1 pid=6926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:48.700271 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 02:55:48.819331 kernel: audit: type=1300 audit(1768877748.497:845): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc292b8920 a2=3 a3=0 items=0 ppid=1 pid=6926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:48.819556 kernel: audit: type=1327 audit(1768877748.497:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:48.497000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:48.719000 audit[6926]: USER_START pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.916128 kernel: audit: type=1105 audit(1768877748.719:846): pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.916286 kernel: audit: type=1103 audit(1768877748.732:847): 
pid=6929 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:48.732000 audit[6929]: CRED_ACQ pid=6929 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:49.607261 sshd[6929]: Connection closed by 10.0.0.1 port 36164 Jan 20 02:55:49.608426 sshd-session[6926]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:49.615000 audit[6926]: USER_END pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:49.624967 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:36164.service: Deactivated successfully. 
Jan 20 02:55:49.636536 kernel: audit: type=1106 audit(1768877749.615:848): pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:49.615000 audit[6926]: CRED_DISP pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:49.673788 kernel: audit: type=1104 audit(1768877749.615:849): pid=6926 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:49.652950 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 02:55:49.664284 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Jan 20 02:55:49.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.129:22-10.0.0.1:36164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:49.680872 systemd-logind[1612]: Removed session 20. 
Jan 20 02:55:50.725269 kubelet[2963]: E0120 02:55:50.723534 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:55:51.737581 kubelet[2963]: E0120 02:55:51.734824 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:55:53.805591 kubelet[2963]: E0120 02:55:53.797416 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:55:54.719240 kubelet[2963]: E0120 02:55:54.717843 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:55:54.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.129:22-10.0.0.1:56296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:54.748221 systemd[1]: Started sshd@20-10.0.0.129:22-10.0.0.1:56296.service - OpenSSH per-connection server daemon (10.0.0.1:56296). Jan 20 02:55:54.812549 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:55:54.812686 kernel: audit: type=1130 audit(1768877754.746:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.129:22-10.0.0.1:56296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:54.978668 update_engine[1617]: I20260120 02:55:54.974027 1617 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:55:54.978668 update_engine[1617]: I20260120 02:55:54.974181 1617 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:55:54.982215 update_engine[1617]: I20260120 02:55:54.981949 1617 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:55:55.021217 update_engine[1617]: E20260120 02:55:55.020334 1617 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020573 1617 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020591 1617 omaha_request_action.cc:617] Omaha request response: Jan 20 02:55:55.021217 update_engine[1617]: E20260120 02:55:55.020718 1617 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020749 1617 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020761 1617 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020769 1617 update_attempter.cc:306] Processing Done. Jan 20 02:55:55.021217 update_engine[1617]: E20260120 02:55:55.020789 1617 update_attempter.cc:619] Update failed. Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020800 1617 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020810 1617 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020819 1617 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020911 1617 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020943 1617 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:55:55.021217 update_engine[1617]: I20260120 02:55:55.020952 1617 omaha_request_action.cc:272] Request: Jan 20 02:55:55.021217 update_engine[1617]: Jan 20 02:55:55.021217 update_engine[1617]: Jan 20 02:55:55.021993 update_engine[1617]: Jan 20 02:55:55.021993 update_engine[1617]: Jan 20 02:55:55.021993 update_engine[1617]: Jan 20 02:55:55.021993 update_engine[1617]: Jan 20 02:55:55.021993 update_engine[1617]: I20260120 02:55:55.020962 1617 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:55:55.021993 update_engine[1617]: I20260120 02:55:55.020991 1617 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:55:55.021993 update_engine[1617]: I20260120 02:55:55.021541 1617 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:55:55.031286 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:55:55.055306 update_engine[1617]: E20260120 02:55:55.054875 1617 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055013 1617 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055030 1617 omaha_request_action.cc:617] Omaha request response: Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055042 1617 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055100 1617 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055115 1617 update_attempter.cc:306] Processing Done. Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055126 1617 update_attempter.cc:310] Error event sent. 
Jan 20 02:55:55.055306 update_engine[1617]: I20260120 02:55:55.055141 1617 update_check_scheduler.cc:74] Next update check in 44m23s Jan 20 02:55:55.071149 locksmithd[1689]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:55:55.179000 audit[6943]: USER_ACCT pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.208015 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:55:55.196000 audit[6943]: CRED_ACQ pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.273805 sshd[6943]: Accepted publickey for core from 10.0.0.1 port 56296 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:55:55.274871 systemd-logind[1612]: New session 21 of user core. 
Jan 20 02:55:55.313259 kernel: audit: type=1101 audit(1768877755.179:852): pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.313405 kernel: audit: type=1103 audit(1768877755.196:853): pid=6943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.319171 kernel: audit: type=1006 audit(1768877755.196:854): pid=6943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 02:55:55.196000 audit[6943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc810660 a2=3 a3=0 items=0 ppid=1 pid=6943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:55.338935 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 02:55:55.407725 kernel: audit: type=1300 audit(1768877755.196:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc810660 a2=3 a3=0 items=0 ppid=1 pid=6943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:55:55.407854 kernel: audit: type=1327 audit(1768877755.196:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:55.196000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:55:55.494144 kernel: audit: type=1105 audit(1768877755.371:855): pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.371000 audit[6943]: USER_START pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.399000 audit[6946]: CRED_ACQ pid=6946 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:55.552044 kernel: audit: type=1103 audit(1768877755.399:856): pid=6946 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:56.722206 sshd[6946]: Connection closed by 10.0.0.1 port 56296 Jan 20 02:55:56.738755 
sshd-session[6943]: pam_unix(sshd:session): session closed for user core Jan 20 02:55:56.751000 audit[6943]: USER_END pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:56.789930 systemd[1]: sshd@20-10.0.0.129:22-10.0.0.1:56296.service: Deactivated successfully. Jan 20 02:55:56.807034 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:55:56.851630 kernel: audit: type=1106 audit(1768877756.751:857): pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:56.751000 audit[6943]: CRED_DISP pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:56.928651 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit. Jan 20 02:55:56.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.129:22-10.0.0.1:56296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:55:56.960466 kernel: audit: type=1104 audit(1768877756.751:858): pid=6943 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:55:56.962656 systemd-logind[1612]: Removed session 21. 
Jan 20 02:55:58.720463 kubelet[2963]: E0120 02:55:58.719948 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:55:58.729362 kubelet[2963]: E0120 02:55:58.729111 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:56:01.813644 systemd[1]: Started sshd@21-10.0.0.129:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430). Jan 20 02:56:01.891188 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:01.891374 kernel: audit: type=1130 audit(1768877761.825:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.129:22-10.0.0.1:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:01.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.129:22-10.0.0.1:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:02.294288 sshd[6982]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:02.288000 audit[6982]: USER_ACCT pid=6982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.317350 sshd-session[6982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:02.306000 audit[6982]: CRED_ACQ pid=6982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.386162 systemd-logind[1612]: New session 22 of user core. Jan 20 02:56:02.421763 kernel: audit: type=1101 audit(1768877762.288:861): pid=6982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.421894 kernel: audit: type=1103 audit(1768877762.306:862): pid=6982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.424254 kernel: audit: type=1006 audit(1768877762.306:863): pid=6982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 02:56:02.306000 audit[6982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad4150f0 a2=3 a3=0 items=0 ppid=1 pid=6982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:02.532429 kernel: audit: type=1300 audit(1768877762.306:863): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad4150f0 a2=3 a3=0 items=0 ppid=1 pid=6982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:02.306000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:02.535922 kernel: audit: type=1327 audit(1768877762.306:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:02.587253 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 02:56:02.623000 audit[6982]: USER_START pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.743535 kernel: audit: type=1105 audit(1768877762.623:864): pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.743682 kernel: audit: type=1103 audit(1768877762.681:865): pid=6987 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.681000 audit[6987]: CRED_ACQ pid=6987 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:02.743840 kubelet[2963]: E0120 02:56:02.736297 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:56:02.743840 kubelet[2963]: E0120 02:56:02.742702 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:56:03.417756 sshd[6987]: Connection closed by 10.0.0.1 port 56430 Jan 20 02:56:03.419433 sshd-session[6982]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:03.434000 audit[6982]: USER_END pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:03.485895 systemd[1]: sshd@21-10.0.0.129:22-10.0.0.1:56430.service: Deactivated successfully. 
Jan 20 02:56:03.504739 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:56:03.505948 kernel: audit: type=1106 audit(1768877763.434:866): pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:03.434000 audit[6982]: CRED_DISP pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:03.522403 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:56:03.578597 systemd-logind[1612]: Removed session 22. Jan 20 02:56:03.592332 kernel: audit: type=1104 audit(1768877763.434:867): pid=6982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:03.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.129:22-10.0.0.1:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:03.760783 kubelet[2963]: E0120 02:56:03.748974 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:56:04.747627 kubelet[2963]: E0120 02:56:04.746649 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:56:07.737150 kubelet[2963]: E0120 02:56:07.735903 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:56:08.479457 systemd[1]: Started sshd@22-10.0.0.129:22-10.0.0.1:44944.service - OpenSSH per-connection server daemon (10.0.0.1:44944). Jan 20 02:56:08.543557 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:08.544386 kernel: audit: type=1130 audit(1768877768.477:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.129:22-10.0.0.1:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:08.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.129:22-10.0.0.1:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:08.932000 audit[7004]: USER_ACCT pid=7004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:08.941861 sshd[7004]: Accepted publickey for core from 10.0.0.1 port 44944 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:08.961908 sshd-session[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:08.952000 audit[7004]: CRED_ACQ pid=7004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.032580 kernel: audit: type=1101 audit(1768877768.932:870): pid=7004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.032725 kernel: audit: type=1103 audit(1768877768.952:871): pid=7004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.032771 kernel: audit: type=1006 audit(1768877768.952:872): pid=7004 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 02:56:09.023700 systemd-logind[1612]: New session 23 of user core. Jan 20 02:56:09.060612 kernel: audit: type=1300 audit(1768877768.952:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd450325b0 a2=3 a3=0 items=0 ppid=1 pid=7004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:08.952000 audit[7004]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd450325b0 a2=3 a3=0 items=0 ppid=1 pid=7004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:09.110340 kernel: audit: type=1327 audit(1768877768.952:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:08.952000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:09.142104 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 02:56:09.183000 audit[7004]: USER_START pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.230059 kernel: audit: type=1105 audit(1768877769.183:873): pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.193000 audit[7012]: CRED_ACQ pid=7012 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:09.282196 kernel: audit: type=1103 audit(1768877769.193:874): pid=7012 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:10.403311 sshd[7012]: Connection closed by 10.0.0.1 port 44944 Jan 20 02:56:10.409444 sshd-session[7004]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:10.425000 audit[7004]: USER_END pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:10.497272 systemd[1]: sshd@22-10.0.0.129:22-10.0.0.1:44944.service: Deactivated successfully. Jan 20 02:56:10.504868 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 20 02:56:10.601964 kernel: audit: type=1106 audit(1768877770.425:875): pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:10.602352 kernel: audit: type=1104 audit(1768877770.462:876): pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:10.462000 audit[7004]: CRED_DISP pid=7004 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:10.532738 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:56:10.609174 systemd-logind[1612]: Removed session 23. Jan 20 02:56:10.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.129:22-10.0.0.1:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:10.725350 kubelet[2963]: E0120 02:56:10.725201 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:56:11.729915 kubelet[2963]: E0120 02:56:11.727734 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:56:15.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.129:22-10.0.0.1:42764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:15.557840 systemd[1]: Started sshd@23-10.0.0.129:22-10.0.0.1:42764.service - OpenSSH per-connection server daemon (10.0.0.1:42764). Jan 20 02:56:15.589718 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:15.589841 kernel: audit: type=1130 audit(1768877775.557:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.129:22-10.0.0.1:42764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:15.731534 containerd[1640]: time="2026-01-20T02:56:15.730434583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:56:15.736937 kubelet[2963]: E0120 02:56:15.735657 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:56:15.751747 kubelet[2963]: E0120 02:56:15.751146 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:56:15.866269 containerd[1640]: time="2026-01-20T02:56:15.864675829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:56:15.880035 containerd[1640]: time="2026-01-20T02:56:15.878538221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:56:15.880035 containerd[1640]: time="2026-01-20T02:56:15.878637665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
Jan 20 02:56:15.883158 kubelet[2963]: E0120 02:56:15.882865 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:56:15.884032 kubelet[2963]: E0120 02:56:15.882940 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:56:15.884032 kubelet[2963]: E0120 02:56:15.883575 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:56:15.889104 containerd[1640]: time="2026-01-20T02:56:15.885941232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:56:16.012726 containerd[1640]: time="2026-01-20T02:56:16.012393976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:56:16.030316 containerd[1640]: time="2026-01-20T02:56:16.029533837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:56:16.030316 containerd[1640]: time="2026-01-20T02:56:16.029660632Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:56:16.030562 kubelet[2963]: E0120 02:56:16.029850 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:56:16.030562 kubelet[2963]: E0120 02:56:16.029907 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:56:16.038607 kubelet[2963]: E0120 02:56:16.033559 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:56:16.038607 kubelet[2963]: E0120 02:56:16.033621 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:56:16.149000 audit[7030]: USER_ACCT pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.158098 sshd[7030]: Accepted publickey for core from 10.0.0.1 port 42764 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:16.173158 sshd-session[7030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:16.190096 kernel: audit: type=1101 audit(1768877776.149:879): pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.164000 audit[7030]: CRED_ACQ pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.202908 systemd-logind[1612]: New session 24 of user core. 
Jan 20 02:56:16.277925 kernel: audit: type=1103 audit(1768877776.164:880): pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.278126 kernel: audit: type=1006 audit(1768877776.164:881): pid=7030 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 02:56:16.278173 kernel: audit: type=1300 audit(1768877776.164:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f1e8eb0 a2=3 a3=0 items=0 ppid=1 pid=7030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:16.164000 audit[7030]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f1e8eb0 a2=3 a3=0 items=0 ppid=1 pid=7030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:16.164000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:16.346724 kernel: audit: type=1327 audit(1768877776.164:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:16.333256 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 02:56:16.401000 audit[7030]: USER_START pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.486054 kernel: audit: type=1105 audit(1768877776.401:882): pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.486198 kernel: audit: type=1103 audit(1768877776.420:883): pid=7033 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:16.420000 audit[7033]: CRED_ACQ pid=7033 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:17.520082 sshd[7033]: Connection closed by 10.0.0.1 port 42764 Jan 20 02:56:17.538880 sshd-session[7030]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:17.572000 audit[7030]: USER_END pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:17.609701 systemd-logind[1612]: Session 24 logged out. Waiting for processes to exit. 
Jan 20 02:56:17.631854 systemd[1]: sshd@23-10.0.0.129:22-10.0.0.1:42764.service: Deactivated successfully.
Jan 20 02:56:17.637315 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 02:56:17.677082 kernel: audit: type=1106 audit(1768877777.572:884): pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:17.677220 kernel: audit: type=1104 audit(1768877777.586:885): pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:17.586000 audit[7030]: CRED_DISP pid=7030 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:17.657268 systemd-logind[1612]: Removed session 24.
Jan 20 02:56:17.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.129:22-10.0.0.1:42764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:17.720057 kubelet[2963]: E0120 02:56:17.719782 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:56:18.726136 containerd[1640]: time="2026-01-20T02:56:18.725711687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 02:56:18.882626 containerd[1640]: time="2026-01-20T02:56:18.864748388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:18.929316 containerd[1640]: time="2026-01-20T02:56:18.929176668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 20 02:56:18.929674 containerd[1640]: time="2026-01-20T02:56:18.929647301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:18.938430 kubelet[2963]: E0120 02:56:18.932726 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 02:56:18.950120 kubelet[2963]: E0120 02:56:18.941343 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 02:56:18.952522 kubelet[2963]: E0120 02:56:18.951132 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:18.962643 containerd[1640]: time="2026-01-20T02:56:18.955421586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 20 02:56:19.065804 containerd[1640]: time="2026-01-20T02:56:19.059707362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:19.070317 containerd[1640]: time="2026-01-20T02:56:19.069448469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 20 02:56:19.070317 containerd[1640]: time="2026-01-20T02:56:19.069811833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:19.070539 kubelet[2963]: E0120 02:56:19.070317 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 02:56:19.070709 kubelet[2963]: E0120 02:56:19.070553 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 02:56:19.070930 kubelet[2963]: E0120 02:56:19.070777 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:19.071104 kubelet[2963]: E0120 02:56:19.070913 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:56:22.601241 systemd[1]: Started sshd@24-10.0.0.129:22-10.0.0.1:42896.service - OpenSSH per-connection server daemon (10.0.0.1:42896).
Jan 20 02:56:22.668452 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:56:22.668664 kernel: audit: type=1130 audit(1768877782.600:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.129:22-10.0.0.1:42896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:22.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.129:22-10.0.0.1:42896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:22.715545 kubelet[2963]: E0120 02:56:22.714635 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:56:22.731206 kubelet[2963]: E0120 02:56:22.729408 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:56:23.069130 sshd[7047]: Accepted publickey for core from 10.0.0.1 port 42896 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:56:23.065767 sshd-session[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:56:23.120156 kernel: audit: type=1101 audit(1768877783.060:888): pid=7047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.060000 audit[7047]: USER_ACCT pid=7047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.119034 systemd-logind[1612]: New session 25 of user core.
Jan 20 02:56:23.063000 audit[7047]: CRED_ACQ pid=7047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.175559 kernel: audit: type=1103 audit(1768877783.063:889): pid=7047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.229564 kernel: audit: type=1006 audit(1768877783.063:890): pid=7047 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 20 02:56:23.063000 audit[7047]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff4a2f0f0 a2=3 a3=0 items=0 ppid=1 pid=7047 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:56:23.238799 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 02:56:23.320132 kernel: audit: type=1300 audit(1768877783.063:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff4a2f0f0 a2=3 a3=0 items=0 ppid=1 pid=7047 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:56:23.320276 kernel: audit: type=1327 audit(1768877783.063:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:56:23.063000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:56:23.292000 audit[7047]: USER_START pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.382614 kernel: audit: type=1105 audit(1768877783.292:891): pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.308000 audit[7050]: CRED_ACQ pid=7050 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.437153 kernel: audit: type=1103 audit(1768877783.308:892): pid=7050 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:23.731088 containerd[1640]: time="2026-01-20T02:56:23.730772947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 02:56:23.874602 containerd[1640]: time="2026-01-20T02:56:23.869573464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:23.927018 containerd[1640]: time="2026-01-20T02:56:23.925296058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 02:56:23.927018 containerd[1640]: time="2026-01-20T02:56:23.925425749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:23.927273 kubelet[2963]: E0120 02:56:23.926157 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 02:56:23.927273 kubelet[2963]: E0120 02:56:23.926223 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 02:56:23.927273 kubelet[2963]: E0120 02:56:23.926318 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:23.927273 kubelet[2963]: E0120 02:56:23.926365 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:56:24.051122 sshd[7050]: Connection closed by 10.0.0.1 port 42896
Jan 20 02:56:24.054365 sshd-session[7047]: pam_unix(sshd:session): session closed for user core
Jan 20 02:56:24.070000 audit[7047]: USER_END pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:24.120048 systemd[1]: sshd@24-10.0.0.129:22-10.0.0.1:42896.service: Deactivated successfully.
Jan 20 02:56:24.144907 kernel: audit: type=1106 audit(1768877784.070:893): pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:24.148766 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 02:56:24.074000 audit[7047]: CRED_DISP pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:24.150867 systemd-logind[1612]: Session 25 logged out. Waiting for processes to exit.
Jan 20 02:56:24.201527 systemd-logind[1612]: Removed session 25.
Jan 20 02:56:24.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.129:22-10.0.0.1:42896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:24.250083 kernel: audit: type=1104 audit(1768877784.074:894): pid=7047 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:27.760751 kubelet[2963]: E0120 02:56:27.760417 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 02:56:29.166183 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:56:29.166326 kernel: audit: type=1130 audit(1768877789.162:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.129:22-10.0.0.1:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:29.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.129:22-10.0.0.1:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:29.162370 systemd[1]: Started sshd@25-10.0.0.129:22-10.0.0.1:58862.service - OpenSSH per-connection server daemon (10.0.0.1:58862).
Jan 20 02:56:29.732576 sshd[7066]: Accepted publickey for core from 10.0.0.1 port 58862 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:56:29.712000 audit[7066]: USER_ACCT pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:29.775408 containerd[1640]: time="2026-01-20T02:56:29.775352838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 20 02:56:29.845109 kernel: audit: type=1101 audit(1768877789.712:897): pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:29.849000 audit[7066]: CRED_ACQ pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:29.863006 sshd-session[7066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:56:29.923600 kernel: audit: type=1103 audit(1768877789.849:898): pid=7066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:30.066349 kernel: audit: type=1006 audit(1768877789.849:899): pid=7066 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 20 02:56:30.066460 kernel: audit: type=1300 audit(1768877789.849:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33864550 a2=3 a3=0 items=0 ppid=1 pid=7066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:56:29.849000 audit[7066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33864550 a2=3 a3=0 items=0 ppid=1 pid=7066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:56:30.066818 kubelet[2963]: E0120 02:56:29.977598 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 02:56:30.066818 kubelet[2963]: E0120 02:56:29.977651 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 02:56:30.066818 kubelet[2963]: E0120 02:56:29.977742 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:30.066818 kubelet[2963]: E0120 02:56:29.977787 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:56:30.067462 containerd[1640]: time="2026-01-20T02:56:29.967438344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:30.067462 containerd[1640]: time="2026-01-20T02:56:29.977281120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 20 02:56:30.067462 containerd[1640]: time="2026-01-20T02:56:29.977381357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:30.081878 systemd-logind[1612]: New session 26 of user core.
Jan 20 02:56:29.849000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:56:30.132564 kernel: audit: type=1327 audit(1768877789.849:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:56:30.167013 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 02:56:30.340066 kernel: audit: type=1105 audit(1768877790.268:900): pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:30.268000 audit[7066]: USER_START pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:30.430756 kernel: audit: type=1103 audit(1768877790.392:901): pid=7087 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:30.392000 audit[7087]: CRED_ACQ pid=7087 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:30.746450 containerd[1640]: time="2026-01-20T02:56:30.746305394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 02:56:30.927805 containerd[1640]: time="2026-01-20T02:56:30.915777065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:30.963852 containerd[1640]: time="2026-01-20T02:56:30.949254407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 02:56:30.963852 containerd[1640]: time="2026-01-20T02:56:30.949385008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:30.964381 kubelet[2963]: E0120 02:56:30.964281 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:56:30.964739 kubelet[2963]: E0120 02:56:30.964614 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:56:30.973418 kubelet[2963]: E0120 02:56:30.973180 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:30.973418 kubelet[2963]: E0120 02:56:30.973243 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:56:31.912060 sshd[7087]: Connection closed by 10.0.0.1 port 58862
Jan 20 02:56:31.928178 sshd-session[7066]: pam_unix(sshd:session): session closed for user core
Jan 20 02:56:31.963000 audit[7066]: USER_END pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:32.063752 systemd[1]: sshd@25-10.0.0.129:22-10.0.0.1:58862.service: Deactivated successfully.
Jan 20 02:56:32.078140 kernel: audit: type=1106 audit(1768877791.963:902): pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:32.083359 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 02:56:31.963000 audit[7066]: CRED_DISP pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:32.103378 systemd-logind[1612]: Session 26 logged out. Waiting for processes to exit.
Jan 20 02:56:32.151433 kernel: audit: type=1104 audit(1768877791.963:903): pid=7066 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:56:32.138301 systemd-logind[1612]: Removed session 26.
Jan 20 02:56:32.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.129:22-10.0.0.1:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:34.776220 kubelet[2963]: E0120 02:56:34.767712 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:56:34.792652 kubelet[2963]: E0120 02:56:34.792572 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:56:36.512807 containerd[1640]: time="2026-01-20T02:56:36.512723492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 02:56:36.629537 containerd[1640]: time="2026-01-20T02:56:36.629251174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:56:36.637221 containerd[1640]: time="2026-01-20T02:56:36.637166016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:56:36.637588 containerd[1640]: time="2026-01-20T02:56:36.637397115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 02:56:36.640994 kubelet[2963]: E0120 02:56:36.640699 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:56:36.640994 kubelet[2963]: E0120 02:56:36.640957 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:56:36.656680 kubelet[2963]: E0120 02:56:36.652806 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:56:36.656680 kubelet[2963]: E0120 02:56:36.656626 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:56:37.091605 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:56:37.091777 kernel: audit: type=1130 audit(1768877797.028:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.129:22-10.0.0.1:37112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:37.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.129:22-10.0.0.1:37112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:56:37.032126 systemd[1]: Started sshd@26-10.0.0.129:22-10.0.0.1:37112.service - OpenSSH per-connection server daemon (10.0.0.1:37112).
Jan 20 02:56:37.843000 audit[7114]: USER_ACCT pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:37.881123 sshd[7114]: Accepted publickey for core from 10.0.0.1 port 37112 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:37.909662 sshd-session[7114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:37.891000 audit[7114]: CRED_ACQ pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.011460 systemd-logind[1612]: New session 27 of user core. Jan 20 02:56:38.035076 kernel: audit: type=1101 audit(1768877797.843:906): pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.035223 kernel: audit: type=1103 audit(1768877797.891:907): pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.035262 kernel: audit: type=1006 audit(1768877797.891:908): pid=7114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 02:56:37.891000 audit[7114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7003e560 a2=3 a3=0 items=0 ppid=1 pid=7114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:38.139137 kernel: audit: type=1300 audit(1768877797.891:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7003e560 a2=3 a3=0 items=0 ppid=1 pid=7114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:38.139320 kernel: audit: type=1327 audit(1768877797.891:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:37.891000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:38.144837 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:56:38.183000 audit[7114]: USER_START pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.233805 kernel: audit: type=1105 audit(1768877798.183:909): pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.233968 kernel: audit: type=1103 audit(1768877798.214:910): pid=7119 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:38.214000 audit[7119]: CRED_ACQ pid=7119 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:39.745365 kubelet[2963]: E0120 02:56:39.745004 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:56:39.780953 sshd[7119]: Connection closed by 10.0.0.1 port 37112 Jan 20 02:56:39.778162 sshd-session[7114]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:39.808000 audit[7114]: USER_END pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:39.854312 systemd[1]: sshd@26-10.0.0.129:22-10.0.0.1:37112.service: Deactivated successfully. Jan 20 02:56:39.873467 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 20 02:56:39.921106 kernel: audit: type=1106 audit(1768877799.808:911): pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:39.921226 kernel: audit: type=1104 audit(1768877799.808:912): pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:39.808000 audit[7114]: CRED_DISP pid=7114 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:39.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.129:22-10.0.0.1:37112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:39.936522 systemd-logind[1612]: Session 27 logged out. Waiting for processes to exit. Jan 20 02:56:39.965201 systemd-logind[1612]: Removed session 27. 
Jan 20 02:56:41.725003 kubelet[2963]: E0120 02:56:41.721392 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:56:41.742429 kubelet[2963]: E0120 02:56:41.734759 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:56:43.739531 kubelet[2963]: E0120 02:56:43.738302 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:56:44.870696 systemd[1]: Started sshd@27-10.0.0.129:22-10.0.0.1:54998.service - OpenSSH per-connection server daemon (10.0.0.1:54998). Jan 20 02:56:44.927622 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:44.927771 kernel: audit: type=1130 audit(1768877804.868:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.129:22-10.0.0.1:54998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:44.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.129:22-10.0.0.1:54998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:45.203774 sshd[7134]: Accepted publickey for core from 10.0.0.1 port 54998 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:45.203000 audit[7134]: USER_ACCT pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.266591 systemd-logind[1612]: New session 28 of user core. Jan 20 02:56:45.240630 sshd-session[7134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:45.238000 audit[7134]: CRED_ACQ pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.285000 kernel: audit: type=1101 audit(1768877805.203:915): pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.285088 kernel: audit: type=1103 audit(1768877805.238:916): pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.304705 kernel: audit: type=1006 audit(1768877805.238:917): pid=7134 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 02:56:45.238000 audit[7134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff40af4bf0 a2=3 a3=0 items=0 ppid=1 pid=7134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:45.319851 kernel: audit: type=1300 audit(1768877805.238:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff40af4bf0 a2=3 a3=0 items=0 ppid=1 pid=7134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:45.360999 kernel: audit: type=1327 audit(1768877805.238:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:45.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:45.364442 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 02:56:45.373000 audit[7134]: USER_START pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.393172 kernel: audit: type=1105 audit(1768877805.373:918): pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.394000 audit[7137]: CRED_ACQ pid=7137 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:45.420904 kernel: audit: type=1103 audit(1768877805.394:919): pid=7137 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:46.218944 sshd[7137]: Connection closed by 10.0.0.1 port 54998 Jan 20 02:56:46.217787 sshd-session[7134]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:46.241642 systemd[1]: sshd@27-10.0.0.129:22-10.0.0.1:54998.service: Deactivated successfully. Jan 20 02:56:46.236000 audit[7134]: USER_END pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:46.276097 systemd[1]: session-28.scope: Deactivated successfully. 
Jan 20 02:56:46.382332 kernel: audit: type=1106 audit(1768877806.236:920): pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:46.382550 kernel: audit: type=1104 audit(1768877806.236:921): pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:46.236000 audit[7134]: CRED_DISP pid=7134 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:46.321958 systemd-logind[1612]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:56:46.341993 systemd-logind[1612]: Removed session 28. Jan 20 02:56:46.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.129:22-10.0.0.1:54998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:49.787687 kubelet[2963]: E0120 02:56:49.782151 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:56:49.813392 kubelet[2963]: E0120 02:56:49.802658 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:56:49.828222 kubelet[2963]: E0120 02:56:49.820672 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:56:51.303657 systemd[1]: Started sshd@28-10.0.0.129:22-10.0.0.1:55132.service - OpenSSH per-connection server daemon (10.0.0.1:55132). Jan 20 02:56:51.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.129:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:51.400312 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:51.400452 kernel: audit: type=1130 audit(1768877811.304:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.129:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:51.832000 audit[7152]: USER_ACCT pid=7152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:51.842611 sshd-session[7152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:51.865232 sshd[7152]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:51.841000 audit[7152]: CRED_ACQ pid=7152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:51.902610 systemd-logind[1612]: New session 29 of user core. 
Jan 20 02:56:51.960995 kernel: audit: type=1101 audit(1768877811.832:924): pid=7152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:51.961156 kernel: audit: type=1103 audit(1768877811.841:925): pid=7152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:51.961198 kernel: audit: type=1006 audit(1768877811.841:926): pid=7152 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 02:56:51.841000 audit[7152]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70eefc00 a2=3 a3=0 items=0 ppid=1 pid=7152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:52.037814 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 02:56:51.841000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:52.091671 kernel: audit: type=1300 audit(1768877811.841:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70eefc00 a2=3 a3=0 items=0 ppid=1 pid=7152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:52.092027 kernel: audit: type=1327 audit(1768877811.841:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:52.103099 kernel: audit: type=1105 audit(1768877812.072:927): pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.072000 audit[7152]: USER_START pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.092000 audit[7155]: CRED_ACQ pid=7155 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.214383 kernel: audit: type=1103 audit(1768877812.092:928): pid=7155 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.872215 sshd[7155]: Connection closed by 10.0.0.1 port 55132 Jan 20 02:56:52.874761 
sshd-session[7152]: pam_unix(sshd:session): session closed for user core Jan 20 02:56:52.886000 audit[7152]: USER_END pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.900577 systemd[1]: sshd@28-10.0.0.129:22-10.0.0.1:55132.service: Deactivated successfully. Jan 20 02:56:52.936691 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:56:52.944356 systemd-logind[1612]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:56:52.953406 kernel: audit: type=1106 audit(1768877812.886:929): pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.956130 kernel: audit: type=1104 audit(1768877812.886:930): pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.886000 audit[7152]: CRED_DISP pid=7152 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:52.966071 systemd-logind[1612]: Removed session 29. Jan 20 02:56:52.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.129:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:53.743990 kubelet[2963]: E0120 02:56:53.743650 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:56:54.723545 kubelet[2963]: E0120 02:56:54.723211 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:56:54.728297 kubelet[2963]: E0120 02:56:54.728144 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:56:55.736280 kubelet[2963]: E0120 02:56:55.735908 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 
02:56:57.836364 kubelet[2963]: E0120 02:56:57.816124 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:56:58.007143 systemd[1]: Started sshd@29-10.0.0.129:22-10.0.0.1:48724.service - OpenSSH per-connection server daemon (10.0.0.1:48724). Jan 20 02:56:58.020980 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:56:58.021029 kernel: audit: type=1130 audit(1768877818.006:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.129:22-10.0.0.1:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:56:58.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.129:22-10.0.0.1:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:56:58.546368 sshd[7183]: Accepted publickey for core from 10.0.0.1 port 48724 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:56:58.530000 audit[7183]: USER_ACCT pid=7183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.631881 kernel: audit: type=1101 audit(1768877818.530:933): pid=7183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.530000 audit[7183]: CRED_ACQ pid=7183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.681702 sshd-session[7183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:56:58.693955 kernel: audit: type=1103 audit(1768877818.530:934): pid=7183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.530000 audit[7183]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a9390e0 a2=3 a3=0 items=0 ppid=1 pid=7183 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:58.745926 kubelet[2963]: E0120 02:56:58.719467 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:56:58.814093 systemd-logind[1612]: New session 30 of user core. Jan 20 02:56:58.837074 kernel: audit: type=1006 audit(1768877818.530:935): pid=7183 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 02:56:58.837123 kernel: audit: type=1300 audit(1768877818.530:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a9390e0 a2=3 a3=0 items=0 ppid=1 pid=7183 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:56:58.837159 kernel: audit: type=1327 audit(1768877818.530:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:58.530000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:56:58.865154 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 20 02:56:58.891000 audit[7183]: USER_START pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.963116 kernel: audit: type=1105 audit(1768877818.891:936): pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:58.905000 audit[7186]: CRED_ACQ pid=7186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:59.049984 kernel: audit: type=1103 audit(1768877818.905:937): pid=7186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:56:59.757723 kubelet[2963]: E0120 02:56:59.757582 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:57:00.000585 sshd[7186]: Connection closed by 10.0.0.1 port 48724 Jan 20 02:57:00.001611 sshd-session[7183]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:00.010000 audit[7183]: USER_END pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:00.076592 kernel: audit: type=1106 audit(1768877820.010:938): pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:00.076719 kernel: audit: type=1104 audit(1768877820.010:939): pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:00.010000 audit[7183]: CRED_DISP pid=7183 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:00.020373 systemd[1]: sshd@29-10.0.0.129:22-10.0.0.1:48724.service: Deactivated successfully. Jan 20 02:57:00.059417 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:57:00.074834 systemd-logind[1612]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:57:00.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.129:22-10.0.0.1:48724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:00.121966 systemd-logind[1612]: Removed session 30. 
Jan 20 02:57:00.750915 kubelet[2963]: E0120 02:57:00.736340 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:57:00.761849 kubelet[2963]: E0120 02:57:00.757132 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:57:04.725900 kubelet[2963]: E0120 02:57:04.725522 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:57:05.052557 systemd[1]: Started sshd@30-10.0.0.129:22-10.0.0.1:33634.service - OpenSSH per-connection server daemon (10.0.0.1:33634). Jan 20 02:57:05.090571 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:05.090767 kernel: audit: type=1130 audit(1768877825.051:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.129:22-10.0.0.1:33634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:05.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.129:22-10.0.0.1:33634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:05.599000 audit[7233]: USER_ACCT pid=7233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:05.695797 kernel: audit: type=1101 audit(1768877825.599:942): pid=7233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:05.628837 sshd-session[7233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:05.696324 sshd[7233]: Accepted publickey for core from 10.0.0.1 port 33634 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:05.624000 audit[7233]: CRED_ACQ pid=7233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:05.722905 kubelet[2963]: E0120 02:57:05.719877 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:57:05.726548 systemd-logind[1612]: New session 31 of user core. 
Jan 20 02:57:05.821127 kernel: audit: type=1103 audit(1768877825.624:943): pid=7233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:05.821302 kernel: audit: type=1006 audit(1768877825.624:944): pid=7233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 02:57:05.821538 kernel: audit: type=1300 audit(1768877825.624:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcc226a40 a2=3 a3=0 items=0 ppid=1 pid=7233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:05.624000 audit[7233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdcc226a40 a2=3 a3=0 items=0 ppid=1 pid=7233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:05.829108 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 02:57:05.624000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:05.929542 kernel: audit: type=1327 audit(1768877825.624:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:05.855000 audit[7233]: USER_START pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:05.885000 audit[7236]: CRED_ACQ pid=7236 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.064541 kernel: audit: type=1105 audit(1768877825.855:945): pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.064740 kernel: audit: type=1103 audit(1768877825.885:946): pid=7236 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.869217 sshd[7236]: Connection closed by 10.0.0.1 port 33634 Jan 20 02:57:06.871274 sshd-session[7233]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:06.883000 audit[7233]: USER_END pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.962576 kernel: audit: type=1106 audit(1768877826.883:947): pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.883000 audit[7233]: CRED_DISP pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.963959 systemd-logind[1612]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:57:06.983431 systemd[1]: sshd@30-10.0.0.129:22-10.0.0.1:33634.service: Deactivated successfully. Jan 20 02:57:07.019316 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:57:07.050635 kernel: audit: type=1104 audit(1768877826.883:948): pid=7233 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:06.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.129:22-10.0.0.1:33634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:07.063013 systemd-logind[1612]: Removed session 31. 
Jan 20 02:57:08.732182 kubelet[2963]: E0120 02:57:08.732127 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:57:11.745848 kubelet[2963]: E0120 02:57:11.745788 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:57:11.929244 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:11.929370 kernel: audit: type=1130 audit(1768877831.919:950): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.129:22-10.0.0.1:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:11.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.129:22-10.0.0.1:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:11.922176 systemd[1]: Started sshd@31-10.0.0.129:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658). 
Jan 20 02:57:12.319000 audit[7259]: USER_ACCT pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.385313 sshd[7259]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:12.404174 sshd-session[7259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:12.436562 kernel: audit: type=1101 audit(1768877832.319:951): pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.387000 audit[7259]: CRED_ACQ pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.501850 systemd-logind[1612]: New session 32 of user core. 
Jan 20 02:57:12.534066 kernel: audit: type=1103 audit(1768877832.387:952): pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.534294 kernel: audit: type=1006 audit(1768877832.392:953): pid=7259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 02:57:12.539061 kernel: audit: type=1300 audit(1768877832.392:953): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc94e34d90 a2=3 a3=0 items=0 ppid=1 pid=7259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:12.392000 audit[7259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc94e34d90 a2=3 a3=0 items=0 ppid=1 pid=7259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:12.392000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:12.602913 kernel: audit: type=1327 audit(1768877832.392:953): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:12.602050 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 20 02:57:12.654000 audit[7259]: USER_START pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.718772 kernel: audit: type=1105 audit(1768877832.654:954): pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.719222 kubelet[2963]: E0120 02:57:12.719172 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:57:12.678000 audit[7262]: CRED_ACQ pid=7262 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:12.770167 kernel: audit: type=1103 audit(1768877832.678:955): pid=7262 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:13.713840 sshd[7262]: Connection closed by 10.0.0.1 port 33658 Jan 20 
02:57:13.716069 sshd-session[7259]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:13.744000 audit[7259]: USER_END pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:13.787434 systemd[1]: sshd@31-10.0.0.129:22-10.0.0.1:33658.service: Deactivated successfully. Jan 20 02:57:13.807992 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:57:13.821932 systemd-logind[1612]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:57:13.868185 kernel: audit: type=1106 audit(1768877833.744:956): pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:13.868334 kernel: audit: type=1104 audit(1768877833.744:957): pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:13.744000 audit[7259]: CRED_DISP pid=7259 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:13.866210 systemd[1]: Started sshd@32-10.0.0.129:22-10.0.0.1:33666.service - OpenSSH per-connection server daemon (10.0.0.1:33666). 
Jan 20 02:57:13.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.129:22-10.0.0.1:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:13.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.129:22-10.0.0.1:33666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:13.900653 systemd-logind[1612]: Removed session 32. Jan 20 02:57:14.418000 audit[7277]: USER_ACCT pid=7277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:14.435804 sshd[7277]: Accepted publickey for core from 10.0.0.1 port 33666 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:14.450000 audit[7277]: CRED_ACQ pid=7277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:14.450000 audit[7277]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe07dfaa30 a2=3 a3=0 items=0 ppid=1 pid=7277 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:14.450000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:14.457152 sshd-session[7277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:14.505340 systemd-logind[1612]: New session 33 of user core. 
Jan 20 02:57:14.525024 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:57:14.542000 audit[7277]: USER_START pid=7277 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:14.588000 audit[7280]: CRED_ACQ pid=7280 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:15.802812 kubelet[2963]: E0120 02:57:15.801343 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:57:15.983045 sshd[7280]: Connection closed by 10.0.0.1 port 33666 Jan 20 02:57:15.982123 sshd-session[7277]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:15.989000 audit[7277]: USER_END pid=7277 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:15.989000 audit[7277]: CRED_DISP pid=7277 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:16.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.129:22-10.0.0.1:54280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:16.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.129:22-10.0.0.1:33666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:16.009580 systemd[1]: Started sshd@33-10.0.0.129:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280). Jan 20 02:57:16.012438 systemd[1]: sshd@32-10.0.0.129:22-10.0.0.1:33666.service: Deactivated successfully. Jan 20 02:57:16.021195 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:57:16.036149 systemd-logind[1612]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:57:16.065307 systemd-logind[1612]: Removed session 33. 
Jan 20 02:57:16.293000 audit[7288]: USER_ACCT pid=7288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:16.304371 sshd[7288]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:16.310000 audit[7288]: CRED_ACQ pid=7288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:16.316000 audit[7288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde78f5080 a2=3 a3=0 items=0 ppid=1 pid=7288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:16.316000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:16.318584 sshd-session[7288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:16.363170 systemd-logind[1612]: New session 34 of user core. Jan 20 02:57:16.402890 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 02:57:16.429000 audit[7288]: USER_START pid=7288 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:16.438000 audit[7294]: CRED_ACQ pid=7294 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:16.748726 kubelet[2963]: E0120 02:57:16.733765 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:57:17.421606 sshd[7294]: Connection closed by 10.0.0.1 port 54280 Jan 20 02:57:17.433748 sshd-session[7288]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:17.442000 audit[7288]: USER_END pid=7288 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:17.494923 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 20 02:57:17.495046 kernel: audit: type=1106 audit(1768877837.442:974): pid=7288 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:17.495325 systemd[1]: sshd@33-10.0.0.129:22-10.0.0.1:54280.service: Deactivated successfully. Jan 20 02:57:17.502972 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:57:17.529575 systemd-logind[1612]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:57:17.442000 audit[7288]: CRED_DISP pid=7288 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:17.576969 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 02:57:17.582311 systemd-logind[1612]: Removed session 34. Jan 20 02:57:17.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.129:22-10.0.0.1:54280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:57:17.638193 kernel: audit: type=1104 audit(1768877837.442:975): pid=7288 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:17.638312 kernel: audit: type=1131 audit(1768877837.495:976): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.129:22-10.0.0.1:54280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:17.828408 systemd-tmpfiles[7306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:57:17.828556 systemd-tmpfiles[7306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:57:17.829117 systemd-tmpfiles[7306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:57:17.860615 systemd-tmpfiles[7306]: ACLs are not supported, ignoring. Jan 20 02:57:17.860785 systemd-tmpfiles[7306]: ACLs are not supported, ignoring. Jan 20 02:57:17.886191 systemd-tmpfiles[7306]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:57:17.886210 systemd-tmpfiles[7306]: Skipping /boot Jan 20 02:57:17.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:17.956095 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 02:57:17.956611 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. 
Jan 20 02:57:18.020357 kernel: audit: type=1130 audit(1768877837.961:977): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:18.020453 kernel: audit: type=1131 audit(1768877837.978:978): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:17.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:18.734371 kubelet[2963]: E0120 02:57:18.730415 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:57:20.764049 kubelet[2963]: E0120 02:57:20.762407 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:57:22.569806 systemd[1]: Started 
sshd@34-10.0.0.129:22-10.0.0.1:54308.service - OpenSSH per-connection server daemon (10.0.0.1:54308). Jan 20 02:57:22.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.129:22-10.0.0.1:54308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:22.632744 kernel: audit: type=1130 audit(1768877842.564:979): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.129:22-10.0.0.1:54308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:23.141211 kernel: audit: type=1101 audit(1768877843.052:980): pid=7310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.052000 audit[7310]: USER_ACCT pid=7310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.066694 sshd-session[7310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:23.142073 sshd[7310]: Accepted publickey for core from 10.0.0.1 port 54308 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:23.054000 audit[7310]: CRED_ACQ pid=7310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.209666 systemd-logind[1612]: New session 35 of user core. 
Jan 20 02:57:23.260110 kernel: audit: type=1103 audit(1768877843.054:981): pid=7310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.260575 kernel: audit: type=1006 audit(1768877843.054:982): pid=7310 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 20 02:57:23.262701 kernel: audit: type=1300 audit(1768877843.054:982): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff46381890 a2=3 a3=0 items=0 ppid=1 pid=7310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:23.054000 audit[7310]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff46381890 a2=3 a3=0 items=0 ppid=1 pid=7310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:23.333878 kernel: audit: type=1327 audit(1768877843.054:982): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:23.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:23.337124 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 02:57:23.401000 audit[7310]: USER_START pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.453000 audit[7313]: CRED_ACQ pid=7313 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.569736 kernel: audit: type=1105 audit(1768877843.401:983): pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.569882 kernel: audit: type=1103 audit(1768877843.453:984): pid=7313 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:23.754470 kubelet[2963]: E0120 02:57:23.754014 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:57:24.664059 sshd[7313]: Connection closed by 10.0.0.1 port 54308 Jan 20 
02:57:24.663577 sshd-session[7310]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:24.685000 audit[7310]: USER_END pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:24.704953 systemd[1]: sshd@34-10.0.0.129:22-10.0.0.1:54308.service: Deactivated successfully. Jan 20 02:57:24.730010 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:57:24.777354 systemd-logind[1612]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:57:24.685000 audit[7310]: CRED_DISP pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:24.791298 systemd-logind[1612]: Removed session 35. Jan 20 02:57:24.865114 kernel: audit: type=1106 audit(1768877844.685:985): pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:24.865841 kernel: audit: type=1104 audit(1768877844.685:986): pid=7310 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:24.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.129:22-10.0.0.1:54308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:57:26.733819 kubelet[2963]: E0120 02:57:26.732407 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:57:26.741775 kubelet[2963]: E0120 02:57:26.740402 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:57:28.735104 kubelet[2963]: E0120 02:57:28.734741 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:57:29.804035 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:29.804674 kernel: audit: type=1130 audit(1768877849.768:988): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.129:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:29.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.129:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:29.769982 systemd[1]: Started sshd@35-10.0.0.129:22-10.0.0.1:47652.service - OpenSSH per-connection server daemon (10.0.0.1:47652). Jan 20 02:57:29.810574 kubelet[2963]: E0120 02:57:29.807850 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:57:29.814546 kubelet[2963]: E0120 02:57:29.814437 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:57:30.310000 audit[7326]: USER_ACCT pid=7326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.400288 kernel: audit: type=1101 audit(1768877850.310:989): pid=7326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.411738 sshd[7326]: Accepted publickey for core from 10.0.0.1 port 47652 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:30.414000 audit[7326]: CRED_ACQ pid=7326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.454016 sshd-session[7326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:30.563854 kernel: audit: type=1103 audit(1768877850.414:990): pid=7326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.563974 kernel: audit: type=1006 audit(1768877850.445:991): pid=7326 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 02:57:30.564013 kernel: audit: type=1300 audit(1768877850.445:991): arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffe72db1ec0 a2=3 a3=0 items=0 ppid=1 pid=7326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:30.445000 audit[7326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe72db1ec0 a2=3 a3=0 items=0 ppid=1 pid=7326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:30.561553 systemd-logind[1612]: New session 36 of user core. Jan 20 02:57:30.614826 kernel: audit: type=1327 audit(1768877850.445:991): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:30.445000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:30.677741 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 02:57:30.723000 audit[7326]: USER_START pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.731000 audit[7352]: CRED_ACQ pid=7352 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.873724 kernel: audit: type=1105 audit(1768877850.723:992): pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:30.873842 kernel: audit: type=1103 
audit(1768877850.731:993): pid=7352 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:32.033938 sshd[7352]: Connection closed by 10.0.0.1 port 47652 Jan 20 02:57:32.036357 sshd-session[7326]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:32.102000 audit[7326]: USER_END pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:32.129825 systemd[1]: sshd@35-10.0.0.129:22-10.0.0.1:47652.service: Deactivated successfully. Jan 20 02:57:32.168916 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:57:32.207696 systemd-logind[1612]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:57:32.229987 systemd-logind[1612]: Removed session 36. 
Jan 20 02:57:32.239551 kernel: audit: type=1106 audit(1768877852.102:994): pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:32.239711 kernel: audit: type=1104 audit(1768877852.102:995): pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:32.102000 audit[7326]: CRED_DISP pid=7326 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:32.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.129:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:57:32.719011 kubelet[2963]: E0120 02:57:32.715673 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:57:32.719011 kubelet[2963]: E0120 02:57:32.717714 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:57:35.717069 kubelet[2963]: E0120 02:57:35.716853 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:57:35.730586 kubelet[2963]: E0120 02:57:35.728263 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:57:37.087194 systemd[1]: Started sshd@36-10.0.0.129:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030). 
Jan 20 02:57:37.165998 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:37.166149 kernel: audit: type=1130 audit(1768877857.093:997): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.129:22-10.0.0.1:59030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:37.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.129:22-10.0.0.1:59030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:37.605790 sshd[7368]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:37.603000 audit[7368]: USER_ACCT pid=7368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:37.606456 sshd-session[7368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:37.605000 audit[7368]: CRED_ACQ pid=7368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:37.711594 kernel: audit: type=1101 audit(1768877857.603:998): pid=7368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:37.711721 kernel: audit: type=1103 audit(1768877857.605:999): pid=7368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:37.711778 kernel: audit: type=1006 audit(1768877857.605:1000): pid=7368 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 20 02:57:37.740771 systemd-logind[1612]: New session 37 of user core. Jan 20 02:57:37.759876 kernel: audit: type=1300 audit(1768877857.605:1000): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe2ae3880 a2=3 a3=0 items=0 ppid=1 pid=7368 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:37.605000 audit[7368]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe2ae3880 a2=3 a3=0 items=0 ppid=1 pid=7368 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:37.824228 kernel: audit: type=1327 audit(1768877857.605:1000): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:37.605000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:37.841674 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 02:57:37.874000 audit[7368]: USER_START pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:38.067236 kernel: audit: type=1105 audit(1768877857.874:1001): pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:38.067427 kernel: audit: type=1103 audit(1768877857.913:1002): pid=7371 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:37.913000 audit[7371]: CRED_ACQ pid=7371 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:39.556123 sshd[7371]: Connection closed by 10.0.0.1 port 59030 Jan 20 02:57:39.576613 sshd-session[7368]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:39.598000 audit[7368]: USER_END pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:39.682271 kernel: audit: type=1106 audit(1768877859.598:1003): pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:39.679748 systemd[1]: sshd@36-10.0.0.129:22-10.0.0.1:59030.service: Deactivated successfully. Jan 20 02:57:39.598000 audit[7368]: CRED_DISP pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:39.712148 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:57:39.763425 systemd-logind[1612]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:57:39.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.129:22-10.0.0.1:59030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:39.789766 kernel: audit: type=1104 audit(1768877859.598:1004): pid=7368 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:39.790839 systemd-logind[1612]: Removed session 37. 
Jan 20 02:57:41.792899 kubelet[2963]: E0120 02:57:41.780580 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:57:41.792899 kubelet[2963]: E0120 02:57:41.781041 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:57:43.746776 kubelet[2963]: E0120 02:57:43.739283 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:57:44.692202 systemd[1]: Started sshd@37-10.0.0.129:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). Jan 20 02:57:44.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.129:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:44.720562 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:44.720634 kernel: audit: type=1130 audit(1768877864.691:1006): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.129:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:44.744854 kubelet[2963]: E0120 02:57:44.744277 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:57:45.113439 sshd[7386]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA 
SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:45.112000 audit[7386]: USER_ACCT pid=7386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.136403 sshd-session[7386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:45.131000 audit[7386]: CRED_ACQ pid=7386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.184672 systemd-logind[1612]: New session 38 of user core. Jan 20 02:57:45.195775 kernel: audit: type=1101 audit(1768877865.112:1007): pid=7386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.195854 kernel: audit: type=1103 audit(1768877865.131:1008): pid=7386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.195879 kernel: audit: type=1006 audit(1768877865.131:1009): pid=7386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 20 02:57:45.208651 kernel: audit: type=1300 audit(1768877865.131:1009): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa50b2de0 a2=3 a3=0 items=0 ppid=1 pid=7386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:45.131000 audit[7386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa50b2de0 a2=3 a3=0 items=0 ppid=1 pid=7386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:45.249384 kernel: audit: type=1327 audit(1768877865.131:1009): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:45.131000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:45.289887 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 20 02:57:45.397000 audit[7386]: USER_START pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.512280 kernel: audit: type=1105 audit(1768877865.397:1010): pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.512441 kernel: audit: type=1103 audit(1768877865.446:1011): pid=7390 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.446000 audit[7390]: CRED_ACQ pid=7390 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:45.760022 kubelet[2963]: E0120 02:57:45.746721 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:57:46.617629 sshd[7390]: Connection closed by 10.0.0.1 port 41608 Jan 20 02:57:46.623907 sshd-session[7386]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:46.626000 audit[7386]: USER_END pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:46.705792 kernel: audit: type=1106 audit(1768877866.626:1012): pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:46.707584 systemd[1]: sshd@37-10.0.0.129:22-10.0.0.1:41608.service: Deactivated successfully. 
Jan 20 02:57:46.626000 audit[7386]: CRED_DISP pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:46.726673 kubelet[2963]: E0120 02:57:46.725801 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:57:46.740707 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:57:46.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.129:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:46.773115 kernel: audit: type=1104 audit(1768877866.626:1013): pid=7386 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:46.775375 systemd-logind[1612]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:57:46.802423 systemd-logind[1612]: Removed session 38. 
Jan 20 02:57:47.867735 kubelet[2963]: E0120 02:57:47.855623 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:57:51.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.129:22-10.0.0.1:41636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:51.676144 systemd[1]: Started sshd@38-10.0.0.129:22-10.0.0.1:41636.service - OpenSSH per-connection server daemon (10.0.0.1:41636). Jan 20 02:57:51.690550 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:51.690641 kernel: audit: type=1130 audit(1768877871.675:1015): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.129:22-10.0.0.1:41636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:57:52.157000 audit[7403]: USER_ACCT pid=7403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.159785 sshd[7403]: Accepted publickey for core from 10.0.0.1 port 41636 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:52.179463 sshd-session[7403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:52.166000 audit[7403]: CRED_ACQ pid=7403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.256104 systemd-logind[1612]: New session 39 of user core. Jan 20 02:57:52.328621 kernel: audit: type=1101 audit(1768877872.157:1016): pid=7403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.334736 kernel: audit: type=1103 audit(1768877872.166:1017): pid=7403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.334806 kernel: audit: type=1006 audit(1768877872.167:1018): pid=7403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 20 02:57:52.391766 kernel: audit: type=1300 audit(1768877872.167:1018): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc571a84e0 a2=3 a3=0 items=0 ppid=1 pid=7403 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:52.167000 audit[7403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc571a84e0 a2=3 a3=0 items=0 ppid=1 pid=7403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:52.421984 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:57:52.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:52.556330 kernel: audit: type=1327 audit(1768877872.167:1018): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:52.492000 audit[7403]: USER_START pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.654627 kernel: audit: type=1105 audit(1768877872.492:1019): pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.561000 audit[7406]: CRED_ACQ pid=7406 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:52.717584 kubelet[2963]: E0120 02:57:52.717361 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:57:52.731638 kernel: audit: type=1103 audit(1768877872.561:1020): pid=7406 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:53.410605 sshd[7406]: Connection closed by 10.0.0.1 port 41636 Jan 20 02:57:53.415000 audit[7403]: USER_END pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:53.412553 sshd-session[7403]: pam_unix(sshd:session): session closed for user core Jan 20 02:57:53.440703 systemd[1]: sshd@38-10.0.0.129:22-10.0.0.1:41636.service: Deactivated successfully. Jan 20 02:57:53.442563 systemd-logind[1612]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:57:53.453397 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:57:53.464383 systemd-logind[1612]: Removed session 39. 
Jan 20 02:57:53.514074 kernel: audit: type=1106 audit(1768877873.415:1021): pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:53.514235 kernel: audit: type=1104 audit(1768877873.415:1022): pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:53.415000 audit[7403]: CRED_DISP pid=7403 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:53.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.129:22-10.0.0.1:41636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:57:53.757180 kubelet[2963]: E0120 02:57:53.757029 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:57:56.748065 kubelet[2963]: E0120 02:57:56.747683 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:57:56.777804 kubelet[2963]: E0120 02:57:56.777390 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:57:58.541233 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:57:58.541414 kernel: audit: type=1130 audit(1768877878.529:1024): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.129:22-10.0.0.1:60038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:58.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.129:22-10.0.0.1:60038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:57:58.529972 systemd[1]: Started sshd@39-10.0.0.129:22-10.0.0.1:60038.service - OpenSSH per-connection server daemon (10.0.0.1:60038). 
Jan 20 02:57:58.979000 audit[7421]: USER_ACCT pid=7421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:58.995793 sshd[7421]: Accepted publickey for core from 10.0.0.1 port 60038 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:57:59.012674 sshd-session[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:57:59.071287 kernel: audit: type=1101 audit(1768877878.979:1025): pid=7421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.007000 audit[7421]: CRED_ACQ pid=7421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.177567 systemd-logind[1612]: New session 40 of user core. 
Jan 20 02:57:59.215199 kernel: audit: type=1103 audit(1768877879.007:1026): pid=7421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.215322 kernel: audit: type=1006 audit(1768877879.007:1027): pid=7421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 20 02:57:59.215350 kernel: audit: type=1300 audit(1768877879.007:1027): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97280950 a2=3 a3=0 items=0 ppid=1 pid=7421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:59.007000 audit[7421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc97280950 a2=3 a3=0 items=0 ppid=1 pid=7421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:57:59.292874 kernel: audit: type=1327 audit(1768877879.007:1027): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:59.007000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:57:59.327926 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 20 02:57:59.429000 audit[7421]: USER_START pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.517230 kernel: audit: type=1105 audit(1768877879.429:1028): pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.434000 audit[7424]: CRED_ACQ pid=7424 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.614119 kernel: audit: type=1103 audit(1768877879.434:1029): pid=7424 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:57:59.773997 kubelet[2963]: E0120 02:57:59.747833 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:58:00.643632 sshd[7424]: Connection closed by 10.0.0.1 port 60038 Jan 
20 02:58:00.652790 sshd-session[7421]: pam_unix(sshd:session): session closed for user core Jan 20 02:58:00.677000 audit[7421]: USER_END pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:00.691547 systemd[1]: sshd@39-10.0.0.129:22-10.0.0.1:60038.service: Deactivated successfully. Jan 20 02:58:00.700712 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:58:00.714749 systemd-logind[1612]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:58:00.732555 kernel: audit: type=1106 audit(1768877880.677:1030): pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:00.732651 kernel: audit: type=1104 audit(1768877880.677:1031): pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:00.677000 audit[7421]: CRED_DISP pid=7421 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:00.743375 systemd-logind[1612]: Removed session 40. Jan 20 02:58:00.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.129:22-10.0.0.1:60038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:58:01.733882 kubelet[2963]: E0120 02:58:01.730143 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:58:03.719353 kubelet[2963]: E0120 02:58:03.719186 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:58:05.804302 kubelet[2963]: E0120 02:58:05.785358 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:58:05.833830 systemd[1]: Started sshd@40-10.0.0.129:22-10.0.0.1:49312.service - OpenSSH per-connection server daemon (10.0.0.1:49312). 
Jan 20 02:58:05.869689 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:58:05.869879 kernel: audit: type=1130 audit(1768877885.838:1033): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.129:22-10.0.0.1:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:05.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.129:22-10.0.0.1:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:06.329000 audit[7463]: USER_ACCT pid=7463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:06.341910 sshd[7463]: Accepted publickey for core from 10.0.0.1 port 49312 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:58:06.347887 sshd-session[7463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:58:06.392571 kernel: audit: type=1101 audit(1768877886.329:1034): pid=7463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:06.392726 kernel: audit: type=1103 audit(1768877886.346:1035): pid=7463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:06.346000 audit[7463]: CRED_ACQ pid=7463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:06.480020 systemd-logind[1612]: New session 41 of user core.
Jan 20 02:58:06.498561 kernel: audit: type=1006 audit(1768877886.346:1036): pid=7463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1
Jan 20 02:58:06.498701 kernel: audit: type=1300 audit(1768877886.346:1036): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd09c550 a2=3 a3=0 items=0 ppid=1 pid=7463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:06.346000 audit[7463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd09c550 a2=3 a3=0 items=0 ppid=1 pid=7463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:06.563775 kernel: audit: type=1327 audit(1768877886.346:1036): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:06.346000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:06.590309 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 02:58:06.626000 audit[7463]: USER_START pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:06.638000 audit[7466]: CRED_ACQ pid=7466 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:06.786719 kernel: audit: type=1105 audit(1768877886.626:1037): pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:06.786865 kernel: audit: type=1103 audit(1768877886.638:1038): pid=7466 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:07.359990 sshd[7466]: Connection closed by 10.0.0.1 port 49312
Jan 20 02:58:07.364000 audit[7463]: USER_END pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:07.391078 systemd[1]: sshd@40-10.0.0.129:22-10.0.0.1:49312.service: Deactivated successfully.
Jan 20 02:58:07.365097 sshd-session[7463]: pam_unix(sshd:session): session closed for user core
Jan 20 02:58:07.405540 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 02:58:07.408549 kernel: audit: type=1106 audit(1768877887.364:1039): pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:07.364000 audit[7463]: CRED_DISP pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:07.420846 systemd-logind[1612]: Session 41 logged out. Waiting for processes to exit.
Jan 20 02:58:07.427037 systemd-logind[1612]: Removed session 41.
Jan 20 02:58:07.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.129:22-10.0.0.1:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:07.474647 kernel: audit: type=1104 audit(1768877887.364:1040): pid=7463 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:07.730346 kubelet[2963]: E0120 02:58:07.730162 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:58:10.722252 kubelet[2963]: E0120 02:58:10.714237 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:58:11.743919 kubelet[2963]: E0120 02:58:11.743821 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 02:58:12.504083 systemd[1]: Started sshd@41-10.0.0.129:22-10.0.0.1:49336.service - OpenSSH per-connection server daemon (10.0.0.1:49336).
Jan 20 02:58:12.597193 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:58:12.605721 kernel: audit: type=1130 audit(1768877892.513:1042): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.129:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:12.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.129:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:12.733310 kubelet[2963]: E0120 02:58:12.732636 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:58:12.733310 kubelet[2963]: E0120 02:58:12.732976 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:58:13.052000 audit[7481]: USER_ACCT pid=7481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:13.184689 kernel: audit: type=1101 audit(1768877893.052:1043): pid=7481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:13.216836 kernel: audit: type=1103 audit(1768877893.067:1044): pid=7481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:13.431004 kernel: audit: type=1006 audit(1768877893.067:1045): pid=7481 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1
Jan 20 02:58:13.495338 kernel: audit: type=1300 audit(1768877893.067:1045): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0b90280 a2=3 a3=0 items=0 ppid=1 pid=7481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:13.986100 kernel: audit: type=1327 audit(1768877893.067:1045): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:13.067000 audit[7481]: CRED_ACQ pid=7481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:13.067000 audit[7481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0b90280 a2=3 a3=0 items=0 ppid=1 pid=7481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:13.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:13.085635 sshd-session[7481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:58:14.621006 sshd[7481]: Accepted publickey for core from 10.0.0.1 port 49336 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:58:14.735200 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 02:58:14.743956 systemd-logind[1612]: New session 42 of user core.
Jan 20 02:58:14.876000 audit[7481]: USER_START pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:14.896087 kubelet[2963]: E0120 02:58:14.896040 2963 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.182s"
Jan 20 02:58:14.911816 kubelet[2963]: E0120 02:58:14.911770 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:58:14.916761 kernel: audit: type=1105 audit(1768877894.876:1046): pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:14.982053 kernel: audit: type=1103 audit(1768877894.900:1047): pid=7484 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:14.900000 audit[7484]: CRED_ACQ pid=7484 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:15.481779 sshd[7484]: Connection closed by 10.0.0.1 port 49336
Jan 20 02:58:15.488787 sshd-session[7481]: pam_unix(sshd:session): session closed for user core
Jan 20 02:58:15.557561 kernel: audit: type=1106 audit(1768877895.503:1048): pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:15.503000 audit[7481]: USER_END pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:15.511000 audit[7481]: CRED_DISP pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:15.588734 systemd[1]: sshd@41-10.0.0.129:22-10.0.0.1:49336.service: Deactivated successfully.
Jan 20 02:58:15.602201 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 02:58:15.606749 kernel: audit: type=1104 audit(1768877895.511:1049): pid=7481 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:15.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.129:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:15.627177 systemd-logind[1612]: Session 42 logged out. Waiting for processes to exit.
Jan 20 02:58:15.643906 systemd-logind[1612]: Removed session 42.
Jan 20 02:58:17.756443 kubelet[2963]: E0120 02:58:17.756107 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:58:19.721549 kubelet[2963]: E0120 02:58:19.721404 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:58:20.538004 systemd[1]: Started sshd@42-10.0.0.129:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016).
Jan 20 02:58:20.547322 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:58:20.548244 kernel: audit: type=1130 audit(1768877900.537:1051): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.129:22-10.0.0.1:58016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:20.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.129:22-10.0.0.1:58016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:20.720558 kubelet[2963]: E0120 02:58:20.720209 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:58:20.841000 audit[7498]: USER_ACCT pid=7498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:20.845142 sshd-session[7498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:58:20.883302 sshd[7498]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:58:20.885124 kernel: audit: type=1101 audit(1768877900.841:1052): pid=7498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:20.843000 audit[7498]: CRED_ACQ pid=7498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:20.887805 systemd-logind[1612]: New session 43 of user core.
Jan 20 02:58:20.944447 kernel: audit: type=1103 audit(1768877900.843:1053): pid=7498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:20.944683 kernel: audit: type=1006 audit(1768877900.843:1054): pid=7498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1
Jan 20 02:58:20.944742 kernel: audit: type=1300 audit(1768877900.843:1054): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1e2f5f90 a2=3 a3=0 items=0 ppid=1 pid=7498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:20.843000 audit[7498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1e2f5f90 a2=3 a3=0 items=0 ppid=1 pid=7498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:20.843000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:20.987934 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 02:58:20.994768 kernel: audit: type=1327 audit(1768877900.843:1054): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:21.027000 audit[7498]: USER_START pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:21.069703 kernel: audit: type=1105 audit(1768877901.027:1055): pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:21.060000 audit[7501]: CRED_ACQ pid=7501 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:21.089737 kernel: audit: type=1103 audit(1768877901.060:1056): pid=7501 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:22.030573 sshd[7501]: Connection closed by 10.0.0.1 port 58016
Jan 20 02:58:22.033226 sshd-session[7498]: pam_unix(sshd:session): session closed for user core
Jan 20 02:58:22.037000 audit[7498]: USER_END pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:22.080207 systemd[1]: sshd@42-10.0.0.129:22-10.0.0.1:58016.service: Deactivated successfully.
Jan 20 02:58:22.123056 kernel: audit: type=1106 audit(1768877902.037:1057): pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:22.123185 kernel: audit: type=1104 audit(1768877902.037:1058): pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:22.037000 audit[7498]: CRED_DISP pid=7498 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:22.109742 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 02:58:22.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.129:22-10.0.0.1:58016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:22.127808 systemd-logind[1612]: Session 43 logged out. Waiting for processes to exit.
Jan 20 02:58:22.135078 systemd-logind[1612]: Removed session 43.
Jan 20 02:58:22.726829 kubelet[2963]: E0120 02:58:22.726460 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:58:24.717982 kubelet[2963]: E0120 02:58:24.717903 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 02:58:25.735029 kubelet[2963]: E0120 02:58:25.728754 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 02:58:25.751088 kubelet[2963]: E0120 02:58:25.740636 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 02:58:27.173941 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:58:27.174106 kernel: audit: type=1130 audit(1768877907.095:1060): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.129:22-10.0.0.1:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:27.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.129:22-10.0.0.1:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:27.093109 systemd[1]: Started sshd@43-10.0.0.129:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074).
Jan 20 02:58:27.433000 audit[7523]: USER_ACCT pid=7523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.478165 sshd[7523]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:58:27.478722 kernel: audit: type=1101 audit(1768877907.433:1061): pid=7523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.444091 sshd-session[7523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:58:27.441000 audit[7523]: CRED_ACQ pid=7523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.492367 systemd-logind[1612]: New session 44 of user core.
Jan 20 02:58:27.526208 kernel: audit: type=1103 audit(1768877907.441:1062): pid=7523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.526366 kernel: audit: type=1006 audit(1768877907.441:1063): pid=7523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1
Jan 20 02:58:27.441000 audit[7523]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b2241e0 a2=3 a3=0 items=0 ppid=1 pid=7523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:27.594419 kernel: audit: type=1300 audit(1768877907.441:1063): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b2241e0 a2=3 a3=0 items=0 ppid=1 pid=7523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:27.600184 kernel: audit: type=1327 audit(1768877907.441:1063): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:27.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:58:27.614803 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 02:58:27.641000 audit[7523]: USER_START pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.697178 kernel: audit: type=1105 audit(1768877907.641:1064): pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.687000 audit[7534]: CRED_ACQ pid=7534 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.758580 kernel: audit: type=1103 audit(1768877907.687:1065): pid=7534 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:27.774609 kubelet[2963]: E0120 02:58:27.773971 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:58:28.453680 sshd[7534]: Connection closed by 10.0.0.1 port 35074
Jan 20 02:58:28.459748 sshd-session[7523]: pam_unix(sshd:session): session closed for user core
Jan 20 02:58:28.470000 audit[7523]: USER_END pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:28.483704 systemd[1]: sshd@43-10.0.0.129:22-10.0.0.1:35074.service: Deactivated successfully.
Jan 20 02:58:28.489184 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 02:58:28.502579 systemd-logind[1612]: Session 44 logged out. Waiting for processes to exit.
Jan 20 02:58:28.509718 systemd-logind[1612]: Removed session 44.
Jan 20 02:58:28.557455 kernel: audit: type=1106 audit(1768877908.470:1066): pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:28.557650 kernel: audit: type=1104 audit(1768877908.470:1067): pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:28.470000 audit[7523]: CRED_DISP pid=7523 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:28.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.129:22-10.0.0.1:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:32.724844 kubelet[2963]: E0120 02:58:32.722172 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 02:58:33.541942 systemd[1]: Started sshd@44-10.0.0.129:22-10.0.0.1:35100.service - OpenSSH per-connection server daemon (10.0.0.1:35100).
Jan 20 02:58:33.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.129:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:33.570059 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:58:33.570747 kernel: audit: type=1130 audit(1768877913.553:1069): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.129:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:58:33.757571 kubelet[2963]: E0120 02:58:33.750132 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 02:58:33.774638 kubelet[2963]: E0120 02:58:33.774559 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 02:58:34.121000 audit[7572]: USER_ACCT pid=7572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:34.130575 sshd-session[7572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:58:34.136415 sshd[7572]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 02:58:34.123000 audit[7572]: CRED_ACQ pid=7572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:34.177021 systemd-logind[1612]: New session 45 of user core.
Jan 20 02:58:34.216842 kernel: audit: type=1101 audit(1768877914.121:1070): pid=7572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:34.217002 kernel: audit: type=1103 audit(1768877914.123:1071): pid=7572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:58:34.217054 kernel: audit: type=1006 audit(1768877914.123:1072): pid=7572 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1
Jan 20 02:58:34.256766 kernel: audit: type=1300 audit(1768877914.123:1072): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd037bce0 a2=3 a3=0 items=0 ppid=1 pid=7572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:58:34.123000 audit[7572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd037bce0 a2=3 a3=0 items=0 ppid=1 pid=7572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session"
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:34.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:34.310221 kernel: audit: type=1327 audit(1768877914.123:1072): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:34.312871 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 02:58:34.343000 audit[7572]: USER_START pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.377583 kernel: audit: type=1105 audit(1768877914.343:1073): pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.361000 audit[7575]: CRED_ACQ pid=7575 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.393631 kernel: audit: type=1103 audit(1768877914.361:1074): pid=7575 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.790562 sshd[7575]: Connection closed by 10.0.0.1 port 35100 Jan 20 02:58:34.786582 sshd-session[7572]: pam_unix(sshd:session): session closed for user core Jan 20 02:58:34.800000 audit[7572]: USER_END pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.816250 systemd[1]: sshd@44-10.0.0.129:22-10.0.0.1:35100.service: Deactivated successfully. Jan 20 02:58:34.835067 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:58:34.842211 systemd-logind[1612]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:58:34.845817 systemd-logind[1612]: Removed session 45. Jan 20 02:58:34.800000 audit[7572]: CRED_DISP pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.889230 kernel: audit: type=1106 audit(1768877914.800:1075): pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.890647 kernel: audit: type=1104 audit(1768877914.800:1076): pid=7572 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:34.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.129:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:58:37.756276 kubelet[2963]: E0120 02:58:37.756183 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:58:39.740931 kubelet[2963]: E0120 02:58:39.740712 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:58:39.929160 systemd[1]: Started sshd@45-10.0.0.129:22-10.0.0.1:54298.service - OpenSSH per-connection server daemon (10.0.0.1:54298). Jan 20 02:58:39.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.129:22-10.0.0.1:54298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:58:39.950652 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:58:39.950793 kernel: audit: type=1130 audit(1768877919.934:1078): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.129:22-10.0.0.1:54298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:40.223000 audit[7597]: USER_ACCT pid=7597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.232563 sshd[7597]: Accepted publickey for core from 10.0.0.1 port 54298 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:58:40.232170 sshd-session[7597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:58:40.262361 systemd-logind[1612]: New session 46 of user core. 
Jan 20 02:58:40.274876 kernel: audit: type=1101 audit(1768877920.223:1079): pid=7597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.229000 audit[7597]: CRED_ACQ pid=7597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.317860 kernel: audit: type=1103 audit(1768877920.229:1080): pid=7597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.318018 kernel: audit: type=1006 audit(1768877920.230:1081): pid=7597 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 20 02:58:40.230000 audit[7597]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf87a1090 a2=3 a3=0 items=0 ppid=1 pid=7597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:40.411881 kernel: audit: type=1300 audit(1768877920.230:1081): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf87a1090 a2=3 a3=0 items=0 ppid=1 pid=7597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:40.230000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:40.428987 systemd[1]: Started session-46.scope - 
Session 46 of User core. Jan 20 02:58:40.469587 kernel: audit: type=1327 audit(1768877920.230:1081): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:40.442000 audit[7597]: USER_START pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.531413 kernel: audit: type=1105 audit(1768877920.442:1082): pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.534550 kernel: audit: type=1103 audit(1768877920.459:1083): pid=7600 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.459000 audit[7600]: CRED_ACQ pid=7600 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:40.738346 kubelet[2963]: E0120 02:58:40.724428 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" 
podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:58:41.264630 sshd[7600]: Connection closed by 10.0.0.1 port 54298 Jan 20 02:58:41.266628 sshd-session[7597]: pam_unix(sshd:session): session closed for user core Jan 20 02:58:41.266000 audit[7597]: USER_END pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:41.277069 systemd[1]: sshd@45-10.0.0.129:22-10.0.0.1:54298.service: Deactivated successfully. Jan 20 02:58:41.289278 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:58:41.319462 systemd-logind[1612]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:58:41.331849 systemd-logind[1612]: Removed session 46. Jan 20 02:58:41.355549 kernel: audit: type=1106 audit(1768877921.266:1084): pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:41.355717 kernel: audit: type=1104 audit(1768877921.266:1085): pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:41.266000 audit[7597]: CRED_DISP pid=7597 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:41.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@45-10.0.0.129:22-10.0.0.1:54298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:43.750387 kubelet[2963]: E0120 02:58:43.746692 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:58:45.767682 kubelet[2963]: E0120 02:58:45.767455 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:58:45.773828 kubelet[2963]: E0120 02:58:45.767862 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:58:46.394270 systemd[1]: Started sshd@46-10.0.0.129:22-10.0.0.1:43348.service - OpenSSH per-connection server daemon (10.0.0.1:43348). 
Jan 20 02:58:46.421550 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:58:46.421648 kernel: audit: type=1130 audit(1768877926.393:1087): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.129:22-10.0.0.1:43348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:46.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.129:22-10.0.0.1:43348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:46.744921 kubelet[2963]: E0120 02:58:46.735578 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:58:46.787636 sshd[7613]: Accepted publickey for core from 10.0.0.1 port 43348 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:58:46.865437 kernel: audit: type=1101 audit(1768877926.782:1088): pid=7613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:46.865819 kernel: audit: type=1103 audit(1768877926.786:1089): pid=7613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:46.782000 audit[7613]: USER_ACCT pid=7613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:46.786000 audit[7613]: CRED_ACQ pid=7613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:46.791092 sshd-session[7613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:58:46.943379 kernel: audit: type=1006 audit(1768877926.786:1090): pid=7613 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 20 02:58:46.943555 kernel: audit: type=1300 audit(1768877926.786:1090): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f150680 a2=3 a3=0 items=0 ppid=1 pid=7613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:46.786000 audit[7613]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f150680 a2=3 a3=0 items=0 ppid=1 pid=7613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:46.953793 systemd-logind[1612]: New session 47 of user core. Jan 20 02:58:46.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:47.012584 kernel: audit: type=1327 audit(1768877926.786:1090): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:47.049712 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 20 02:58:47.090000 audit[7613]: USER_START pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.165599 kernel: audit: type=1105 audit(1768877927.090:1091): pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.140000 audit[7617]: CRED_ACQ pid=7617 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.234071 kernel: audit: type=1103 audit(1768877927.140:1092): pid=7617 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.720589 kubelet[2963]: E0120 02:58:47.720414 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
02:58:47.951835 sshd[7617]: Connection closed by 10.0.0.1 port 43348 Jan 20 02:58:47.953921 sshd-session[7613]: pam_unix(sshd:session): session closed for user core Jan 20 02:58:47.966000 audit[7613]: USER_END pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.982045 systemd[1]: sshd@46-10.0.0.129:22-10.0.0.1:43348.service: Deactivated successfully. Jan 20 02:58:48.008887 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:58:48.046759 systemd-logind[1612]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:58:48.051776 systemd-logind[1612]: Removed session 47. Jan 20 02:58:47.966000 audit[7613]: CRED_DISP pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:48.123345 kernel: audit: type=1106 audit(1768877927.966:1093): pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:48.123575 kernel: audit: type=1104 audit(1768877927.966:1094): pid=7613 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:47.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.129:22-10.0.0.1:43348 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:48.743597 kubelet[2963]: E0120 02:58:48.743443 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:58:51.742309 kubelet[2963]: E0120 02:58:51.740786 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:58:53.067916 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:58:53.068010 kernel: audit: type=1130 audit(1768877933.061:1096): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.129:22-10.0.0.1:43352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:53.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.129:22-10.0.0.1:43352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:58:53.062335 systemd[1]: Started sshd@47-10.0.0.129:22-10.0.0.1:43352.service - OpenSSH per-connection server daemon (10.0.0.1:43352). Jan 20 02:58:53.275000 audit[7631]: USER_ACCT pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.293327 sshd-session[7631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:58:53.303230 sshd[7631]: Accepted publickey for core from 10.0.0.1 port 43352 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:58:53.331793 systemd-logind[1612]: New session 48 of user core. Jan 20 02:58:53.341981 kernel: audit: type=1101 audit(1768877933.275:1097): pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.342044 kernel: audit: type=1103 audit(1768877933.291:1098): pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.291000 audit[7631]: CRED_ACQ pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.373082 kernel: audit: type=1006 audit(1768877933.291:1099): pid=7631 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 20 02:58:53.408911 systemd[1]: Started 
session-48.scope - Session 48 of User core. Jan 20 02:58:53.291000 audit[7631]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32c391b0 a2=3 a3=0 items=0 ppid=1 pid=7631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:53.486211 kernel: audit: type=1300 audit(1768877933.291:1099): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32c391b0 a2=3 a3=0 items=0 ppid=1 pid=7631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:58:53.486358 kernel: audit: type=1327 audit(1768877933.291:1099): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:53.291000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:53.442000 audit[7631]: USER_START pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.588650 kernel: audit: type=1105 audit(1768877933.442:1100): pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.458000 audit[7634]: CRED_ACQ pid=7634 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.638305 kernel: audit: 
type=1103 audit(1768877933.458:1101): pid=7634 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:53.833631 kubelet[2963]: E0120 02:58:53.820095 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:58:54.298620 sshd[7634]: Connection closed by 10.0.0.1 port 43352 Jan 20 02:58:54.299577 sshd-session[7631]: pam_unix(sshd:session): session closed for user core Jan 20 02:58:54.308000 audit[7631]: USER_END pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:54.344337 systemd[1]: sshd@47-10.0.0.129:22-10.0.0.1:43352.service: Deactivated successfully. 
Jan 20 02:58:54.404592 kernel: audit: type=1106 audit(1768877934.308:1102): pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:54.371112 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:58:54.308000 audit[7631]: CRED_DISP pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:54.442720 systemd-logind[1612]: Session 48 logged out. Waiting for processes to exit. Jan 20 02:58:54.447639 systemd-logind[1612]: Removed session 48. Jan 20 02:58:54.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.129:22-10.0.0.1:43352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:58:54.501335 kernel: audit: type=1104 audit(1768877934.308:1103): pid=7631 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:54.718244 kubelet[2963]: E0120 02:58:54.715962 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:58:55.787420 kubelet[2963]: E0120 02:58:55.780033 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:58:57.730915 kubelet[2963]: E0120 02:58:57.723732 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:58:58.718086 kubelet[2963]: E0120 02:58:58.716977 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:58:59.516536 kernel: 
kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:58:59.516630 kernel: audit: type=1130 audit(1768877939.482:1105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.129:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:59.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.129:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:58:59.491968 systemd[1]: Started sshd@48-10.0.0.129:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). Jan 20 02:58:59.764890 kubelet[2963]: E0120 02:58:59.764754 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:58:59.931000 audit[7649]: USER_ACCT pid=7649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:59.942401 sshd[7649]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:58:59.944708 sshd-session[7649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:00.008761 systemd-logind[1612]: New session 49 of user 
core. Jan 20 02:59:00.035267 kernel: audit: type=1101 audit(1768877939.931:1106): pid=7649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:00.035360 kernel: audit: type=1103 audit(1768877939.931:1107): pid=7649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:58:59.931000 audit[7649]: CRED_ACQ pid=7649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:00.143148 kernel: audit: type=1006 audit(1768877939.931:1108): pid=7649 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 20 02:58:59.931000 audit[7649]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb54c8e0 a2=3 a3=0 items=0 ppid=1 pid=7649 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:00.207287 systemd[1]: Started session-49.scope - Session 49 of User core. 
Jan 20 02:59:00.219959 kernel: audit: type=1300 audit(1768877939.931:1108): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb54c8e0 a2=3 a3=0 items=0 ppid=1 pid=7649 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:00.220077 kernel: audit: type=1327 audit(1768877939.931:1108): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:58:59.931000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:00.264407 kernel: audit: type=1105 audit(1768877940.228:1109): pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:00.228000 audit[7649]: USER_START pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:00.267000 audit[7663]: CRED_ACQ pid=7663 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:00.410284 kernel: audit: type=1103 audit(1768877940.267:1110): pid=7663 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:01.467611 sshd[7663]: Connection closed by 10.0.0.1 port 47228 Jan 20 02:59:01.469158 
sshd-session[7649]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:01.479000 audit[7649]: USER_END pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:01.531362 systemd-logind[1612]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:59:01.479000 audit[7649]: CRED_DISP pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:01.594352 systemd[1]: sshd@48-10.0.0.129:22-10.0.0.1:47228.service: Deactivated successfully. Jan 20 02:59:01.646272 kernel: audit: type=1106 audit(1768877941.479:1111): pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:01.646413 kernel: audit: type=1104 audit(1768877941.479:1112): pid=7649 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:01.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.129:22-10.0.0.1:47228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:01.644189 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 02:59:01.691667 systemd-logind[1612]: Removed session 49. 
Jan 20 02:59:01.794328 kubelet[2963]: E0120 02:59:01.789303 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:59:04.732412 kubelet[2963]: E0120 02:59:04.732053 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:59:06.543805 systemd[1]: Started sshd@49-10.0.0.129:22-10.0.0.1:38186.service - OpenSSH 
per-connection server daemon (10.0.0.1:38186). Jan 20 02:59:06.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.129:22-10.0.0.1:38186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:06.585327 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:06.585457 kernel: audit: type=1130 audit(1768877946.542:1114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.129:22-10.0.0.1:38186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:06.739341 kubelet[2963]: E0120 02:59:06.738773 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:59:07.496390 sshd[7693]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:07.494000 audit[7693]: USER_ACCT pid=7693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:07.562016 sshd-session[7693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:07.600932 kernel: audit: type=1101 audit(1768877947.494:1115): pid=7693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:07.509000 audit[7693]: CRED_ACQ pid=7693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:07.682898 systemd-logind[1612]: New session 50 of user core. Jan 20 02:59:07.743618 kubelet[2963]: E0120 02:59:07.738789 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:59:07.771579 kernel: audit: type=1103 audit(1768877947.509:1116): pid=7693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:07.771751 kernel: audit: type=1006 audit(1768877947.509:1117): pid=7693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 20 02:59:07.771801 kernel: audit: type=1300 audit(1768877947.509:1117): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdee56ab30 a2=3 a3=0 items=0 ppid=1 pid=7693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:59:07.509000 audit[7693]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdee56ab30 a2=3 a3=0 items=0 ppid=1 pid=7693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:07.851065 kernel: audit: type=1327 audit(1768877947.509:1117): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:07.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:07.985843 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 02:59:08.071000 audit[7693]: USER_START pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:08.142000 audit[7697]: CRED_ACQ pid=7697 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:08.266367 kernel: audit: type=1105 audit(1768877948.071:1118): pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:08.266728 kernel: audit: type=1103 audit(1768877948.142:1119): pid=7697 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:08.778956 kubelet[2963]: E0120 
02:59:08.775692 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:59:09.673252 sshd[7697]: Connection closed by 10.0.0.1 port 38186 Jan 20 02:59:09.662712 sshd-session[7693]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:09.697000 audit[7693]: USER_END pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:09.777221 kernel: audit: type=1106 audit(1768877949.697:1120): pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:09.715000 audit[7693]: CRED_DISP pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:09.797984 systemd[1]: sshd@49-10.0.0.129:22-10.0.0.1:38186.service: Deactivated successfully. Jan 20 02:59:09.837016 systemd[1]: session-50.scope: Deactivated successfully. 
Jan 20 02:59:09.877698 kernel: audit: type=1104 audit(1768877949.715:1121): pid=7693 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:09.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.129:22-10.0.0.1:38186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:09.884358 systemd-logind[1612]: Session 50 logged out. Waiting for processes to exit. Jan 20 02:59:09.896190 systemd-logind[1612]: Removed session 50. Jan 20 02:59:14.726792 kubelet[2963]: E0120 02:59:14.726399 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:59:14.765596 systemd[1]: Started sshd@50-10.0.0.129:22-10.0.0.1:46760.service - OpenSSH per-connection server daemon (10.0.0.1:46760). Jan 20 02:59:14.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.129:22-10.0.0.1:46760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:14.780403 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:14.780593 kernel: audit: type=1130 audit(1768877954.765:1123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.129:22-10.0.0.1:46760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:15.123923 sshd[7720]: Accepted publickey for core from 10.0.0.1 port 46760 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:15.122000 audit[7720]: USER_ACCT pid=7720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.132180 sshd-session[7720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:15.242882 kernel: audit: type=1101 audit(1768877955.122:1124): pid=7720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.248978 kernel: audit: type=1103 audit(1768877955.130:1125): pid=7720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.130000 audit[7720]: CRED_ACQ pid=7720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.249996 systemd-logind[1612]: New session 51 of user core. 
Jan 20 02:59:15.282774 kernel: audit: type=1006 audit(1768877955.131:1126): pid=7720 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 20 02:59:15.131000 audit[7720]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc943cefa0 a2=3 a3=0 items=0 ppid=1 pid=7720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:15.333293 kernel: audit: type=1300 audit(1768877955.131:1126): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc943cefa0 a2=3 a3=0 items=0 ppid=1 pid=7720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:15.333872 kernel: audit: type=1327 audit(1768877955.131:1126): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:15.131000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:15.338330 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 20 02:59:15.368000 audit[7720]: USER_START pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.401137 kernel: audit: type=1105 audit(1768877955.368:1127): pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.403000 audit[7723]: CRED_ACQ pid=7723 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.456742 kernel: audit: type=1103 audit(1768877955.403:1128): pid=7723 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:15.873184 kubelet[2963]: E0120 02:59:15.860181 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:59:16.239838 sshd[7723]: Connection closed by 10.0.0.1 port 46760 Jan 20 02:59:16.242842 sshd-session[7720]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:16.276000 audit[7720]: USER_END pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:16.340558 systemd-logind[1612]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:59:16.408067 systemd[1]: sshd@50-10.0.0.129:22-10.0.0.1:46760.service: Deactivated successfully. Jan 20 02:59:16.428199 kernel: audit: type=1106 audit(1768877956.276:1129): pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:16.428355 kernel: audit: type=1104 audit(1768877956.276:1130): pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:16.276000 audit[7720]: CRED_DISP pid=7720 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:16.432243 systemd[1]: session-51.scope: Deactivated 
successfully. Jan 20 02:59:16.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.129:22-10.0.0.1:46760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:16.502395 systemd-logind[1612]: Removed session 51. Jan 20 02:59:16.736155 kubelet[2963]: E0120 02:59:16.735853 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:59:18.743923 kubelet[2963]: E0120 02:59:18.743859 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:59:19.778765 kubelet[2963]: E0120 02:59:19.778264 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:59:20.732182 kubelet[2963]: E0120 02:59:20.731249 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:59:21.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.129:22-10.0.0.1:46786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:21.309269 systemd[1]: Started sshd@51-10.0.0.129:22-10.0.0.1:46786.service - OpenSSH per-connection server daemon (10.0.0.1:46786). Jan 20 02:59:21.331578 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:21.331682 kernel: audit: type=1130 audit(1768877961.307:1132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.129:22-10.0.0.1:46786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:21.726000 audit[7738]: USER_ACCT pid=7738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:21.791130 kernel: audit: type=1101 audit(1768877961.726:1133): pid=7738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:21.791284 sshd[7738]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:21.797284 sshd-session[7738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:21.795000 audit[7738]: CRED_ACQ pid=7738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:21.894285 kernel: audit: type=1103 audit(1768877961.795:1134): pid=7738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:21.894459 kernel: audit: type=1006 audit(1768877961.796:1135): pid=7738 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 20 02:59:21.894579 kernel: audit: type=1300 audit(1768877961.796:1135): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2cd5cc0 a2=3 a3=0 items=0 ppid=1 pid=7738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:21.796000 audit[7738]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2cd5cc0 a2=3 a3=0 items=0 ppid=1 pid=7738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:21.943368 systemd-logind[1612]: New session 52 of user core. Jan 20 02:59:21.970168 kernel: audit: type=1327 audit(1768877961.796:1135): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:21.796000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:22.026278 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 20 02:59:22.060000 audit[7738]: USER_START pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:22.174135 kernel: audit: type=1105 audit(1768877962.060:1136): pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:22.174251 kernel: audit: type=1103 audit(1768877962.081:1137): pid=7741 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:22.081000 audit[7741]: CRED_ACQ pid=7741 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:25.306293 sshd[7741]: Connection closed by 10.0.0.1 port 46786 Jan 20 02:59:25.312744 sshd-session[7738]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:25.329000 audit[7738]: USER_END pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:25.372880 systemd[1]: sshd@51-10.0.0.129:22-10.0.0.1:46786.service: Deactivated successfully. Jan 20 02:59:25.388422 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 02:59:25.425129 kernel: audit: type=1106 audit(1768877965.329:1138): pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:25.329000 audit[7738]: CRED_DISP pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:25.444966 systemd-logind[1612]: Session 52 logged out. Waiting for processes to exit. 
Jan 20 02:59:25.507417 kernel: audit: type=1104 audit(1768877965.329:1139): pid=7738 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:25.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.129:22-10.0.0.1:46786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:25.523974 systemd-logind[1612]: Removed session 52. Jan 20 02:59:28.762099 kubelet[2963]: E0120 02:59:28.758223 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:59:29.728416 kubelet[2963]: E0120 02:59:29.728349 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:59:29.755296 kubelet[2963]: E0120 02:59:29.755195 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:59:30.444085 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:30.444244 kernel: audit: type=1130 audit(1768877970.408:1141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.129:22-10.0.0.1:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:30.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.129:22-10.0.0.1:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:30.405922 systemd[1]: Started sshd@52-10.0.0.129:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890). 
Jan 20 02:59:30.721784 kubelet[2963]: E0120 02:59:30.720798 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:59:30.903000 audit[7774]: USER_ACCT pid=7774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:30.910745 sshd-session[7774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:30.917241 sshd[7774]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:30.965749 kernel: audit: type=1101 audit(1768877970.903:1142): pid=7774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:30.965851 kernel: audit: type=1103 audit(1768877970.904:1143): pid=7774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:30.904000 audit[7774]: CRED_ACQ pid=7774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:30.976713 systemd-logind[1612]: New session 53 of user core. Jan 20 02:59:31.028870 kernel: audit: type=1006 audit(1768877970.904:1144): pid=7774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 20 02:59:31.050808 kernel: audit: type=1300 audit(1768877970.904:1144): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebbccad90 a2=3 a3=0 items=0 ppid=1 pid=7774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:30.904000 audit[7774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebbccad90 a2=3 a3=0 items=0 ppid=1 pid=7774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:30.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:31.139196 kernel: audit: type=1327 audit(1768877970.904:1144): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:31.157159 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 20 02:59:31.206000 audit[7774]: USER_START pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.271152 kernel: audit: type=1105 audit(1768877971.206:1145): pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.271302 kernel: audit: type=1103 audit(1768877971.245:1146): pid=7783 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.245000 audit[7783]: CRED_ACQ pid=7783 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.701382 sshd[7783]: Connection closed by 10.0.0.1 port 54890 Jan 20 02:59:31.709295 sshd-session[7774]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:31.708000 audit[7774]: USER_END pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.772908 kernel: audit: type=1106 audit(1768877971.708:1147): pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.774153 systemd[1]: sshd@52-10.0.0.129:22-10.0.0.1:54890.service: Deactivated successfully. Jan 20 02:59:31.780415 systemd-logind[1612]: Session 53 logged out. Waiting for processes to exit. Jan 20 02:59:31.708000 audit[7774]: CRED_DISP pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:31.800291 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 02:59:31.833853 systemd-logind[1612]: Removed session 53. Jan 20 02:59:31.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.129:22-10.0.0.1:54890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:31.849854 kernel: audit: type=1104 audit(1768877971.708:1148): pid=7774 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:33.765768 kubelet[2963]: E0120 02:59:33.763747 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:59:33.779208 kubelet[2963]: E0120 02:59:33.778045 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:59:36.759689 systemd[1]: Started sshd@53-10.0.0.129:22-10.0.0.1:52302.service - OpenSSH per-connection server daemon (10.0.0.1:52302). Jan 20 02:59:36.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.129:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:36.814762 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:36.815661 kernel: audit: type=1130 audit(1768877976.762:1150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.129:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:37.094000 audit[7802]: USER_ACCT pid=7802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.100382 sshd[7802]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:37.109670 sshd-session[7802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:37.153198 kernel: audit: type=1101 audit(1768877977.094:1151): pid=7802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.108000 audit[7802]: CRED_ACQ pid=7802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.189815 systemd-logind[1612]: New session 54 of user core. 
Jan 20 02:59:37.248696 kernel: audit: type=1103 audit(1768877977.108:1152): pid=7802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.262724 kernel: audit: type=1006 audit(1768877977.108:1153): pid=7802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=54 res=1 Jan 20 02:59:37.262815 kernel: audit: type=1300 audit(1768877977.108:1153): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad3a0960 a2=3 a3=0 items=0 ppid=1 pid=7802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:37.108000 audit[7802]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad3a0960 a2=3 a3=0 items=0 ppid=1 pid=7802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:37.348980 kernel: audit: type=1327 audit(1768877977.108:1153): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:37.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:37.309368 systemd[1]: Started session-54.scope - Session 54 of User core. 
Jan 20 02:59:37.347000 audit[7802]: USER_START pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.372000 audit[7805]: CRED_ACQ pid=7805 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.465599 kernel: audit: type=1105 audit(1768877977.347:1154): pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:37.465768 kernel: audit: type=1103 audit(1768877977.372:1155): pid=7805 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:38.365102 sshd[7805]: Connection closed by 10.0.0.1 port 52302 Jan 20 02:59:38.363796 sshd-session[7802]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:38.396000 audit[7802]: USER_END pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:38.463721 kernel: audit: type=1106 audit(1768877978.396:1156): pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:38.412693 systemd[1]: sshd@53-10.0.0.129:22-10.0.0.1:52302.service: Deactivated successfully. Jan 20 02:59:38.423997 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 02:59:38.431447 systemd-logind[1612]: Session 54 logged out. Waiting for processes to exit. Jan 20 02:59:38.468299 systemd-logind[1612]: Removed session 54. Jan 20 02:59:38.396000 audit[7802]: CRED_DISP pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:38.558893 kernel: audit: type=1104 audit(1768877978.396:1157): pid=7802 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:38.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.129:22-10.0.0.1:52302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:38.726324 kubelet[2963]: E0120 02:59:38.720447 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:59:38.734647 kubelet[2963]: E0120 02:59:38.729739 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:59:40.731700 kubelet[2963]: E0120 02:59:40.725678 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:59:41.763541 kubelet[2963]: E0120 02:59:41.762588 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:59:42.715167 kubelet[2963]: E0120 02:59:42.715007 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:59:42.737061 kubelet[2963]: E0120 02:59:42.733947 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:59:43.415934 systemd[1]: Started sshd@54-10.0.0.129:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314). Jan 20 02:59:43.428874 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:43.429059 kernel: audit: type=1130 audit(1768877983.414:1159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.129:22-10.0.0.1:52314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:43.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.129:22-10.0.0.1:52314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:43.743450 kubelet[2963]: E0120 02:59:43.743256 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:59:43.743000 audit[7821]: USER_ACCT pid=7821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:43.761009 sshd[7821]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:43.776726 sshd-session[7821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:43.813776 kernel: audit: type=1101 audit(1768877983.743:1160): pid=7821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:43.813946 kernel: audit: type=1103 audit(1768877983.775:1161): pid=7821 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:43.775000 audit[7821]: CRED_ACQ pid=7821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:43.803755 systemd-logind[1612]: New session 55 of user core. Jan 20 02:59:43.775000 audit[7821]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7c324780 a2=3 a3=0 items=0 ppid=1 pid=7821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:43.875746 kernel: audit: type=1006 audit(1768877983.775:1162): pid=7821 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Jan 20 02:59:43.875943 kernel: audit: type=1300 audit(1768877983.775:1162): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7c324780 a2=3 a3=0 items=0 ppid=1 pid=7821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:43.880231 kernel: audit: type=1327 audit(1768877983.775:1162): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:43.775000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:43.883095 systemd[1]: Started session-55.scope - Session 55 of User core. 
Jan 20 02:59:43.917000 audit[7821]: USER_START pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:43.927000 audit[7824]: CRED_ACQ pid=7824 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.002572 kernel: audit: type=1105 audit(1768877983.917:1163): pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.002732 kernel: audit: type=1103 audit(1768877983.927:1164): pid=7824 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.275691 sshd[7824]: Connection closed by 10.0.0.1 port 52314 Jan 20 02:59:44.281082 sshd-session[7821]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:44.304063 systemd[1]: sshd@54-10.0.0.129:22-10.0.0.1:52314.service: Deactivated successfully. 
Jan 20 02:59:44.354818 kernel: audit: type=1106 audit(1768877984.292:1165): pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.292000 audit[7821]: USER_END pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.335161 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 02:59:44.345619 systemd-logind[1612]: Session 55 logged out. Waiting for processes to exit. Jan 20 02:59:44.348103 systemd-logind[1612]: Removed session 55. Jan 20 02:59:44.292000 audit[7821]: CRED_DISP pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.129:22-10.0.0.1:52314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:44.395009 kernel: audit: type=1104 audit(1768877984.292:1166): pid=7821 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:44.719947 kubelet[2963]: E0120 02:59:44.719625 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:59:44.728428 kubelet[2963]: E0120 02:59:44.728381 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:59:49.387614 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:49.387918 kernel: audit: type=1130 audit(1768877989.371:1168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.129:22-10.0.0.1:54122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:49.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.129:22-10.0.0.1:54122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:49.373635 systemd[1]: Started sshd@55-10.0.0.129:22-10.0.0.1:54122.service - OpenSSH per-connection server daemon (10.0.0.1:54122). Jan 20 02:59:49.762656 kubelet[2963]: E0120 02:59:49.747319 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:59:49.844443 sshd[7837]: Accepted publickey for core from 10.0.0.1 port 54122 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:49.842000 audit[7837]: USER_ACCT pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:49.860983 sshd-session[7837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:49.904581 kernel: audit: type=1101 audit(1768877989.842:1169): pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:49.858000 audit[7837]: CRED_ACQ pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:49.923620 systemd-logind[1612]: New session 56 of user core. 
Jan 20 02:59:49.960958 kernel: audit: type=1103 audit(1768877989.858:1170): pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:49.961091 kernel: audit: type=1006 audit(1768877989.858:1171): pid=7837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1 Jan 20 02:59:49.858000 audit[7837]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd23b6baf0 a2=3 a3=0 items=0 ppid=1 pid=7837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:50.028984 kernel: audit: type=1300 audit(1768877989.858:1171): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd23b6baf0 a2=3 a3=0 items=0 ppid=1 pid=7837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:49.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:50.060951 kernel: audit: type=1327 audit(1768877989.858:1171): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:50.054451 systemd[1]: Started session-56.scope - Session 56 of User core. 
Jan 20 02:59:50.084000 audit[7837]: USER_START pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.164596 kernel: audit: type=1105 audit(1768877990.084:1172): pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.164795 kernel: audit: type=1103 audit(1768877990.108:1173): pid=7840 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.108000 audit[7840]: CRED_ACQ pid=7840 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.875469 sshd[7840]: Connection closed by 10.0.0.1 port 54122 Jan 20 02:59:50.874828 sshd-session[7837]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:50.874000 audit[7837]: USER_END pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.896194 systemd[1]: sshd@55-10.0.0.129:22-10.0.0.1:54122.service: Deactivated successfully. 
Jan 20 02:59:50.918781 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 02:59:50.942336 systemd-logind[1612]: Session 56 logged out. Waiting for processes to exit. Jan 20 02:59:50.874000 audit[7837]: CRED_DISP pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.955329 systemd-logind[1612]: Removed session 56. Jan 20 02:59:50.995735 kernel: audit: type=1106 audit(1768877990.874:1174): pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.995919 kernel: audit: type=1104 audit(1768877990.874:1175): pid=7837 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:50.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.129:22-10.0.0.1:54122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:54.730009 kubelet[2963]: E0120 02:59:54.727883 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 02:59:55.731780 kubelet[2963]: E0120 02:59:55.729818 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:59:55.738089 kubelet[2963]: E0120 02:59:55.735455 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 02:59:55.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.129:22-10.0.0.1:40806 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:55.902992 systemd[1]: Started sshd@56-10.0.0.129:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). Jan 20 02:59:55.916997 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:59:55.917295 kernel: audit: type=1130 audit(1768877995.901:1177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.129:22-10.0.0.1:40806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:59:56.178000 audit[7854]: USER_ACCT pid=7854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.188776 sshd[7854]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 02:59:56.192364 sshd-session[7854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:59:56.189000 audit[7854]: CRED_ACQ pid=7854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.240928 kernel: audit: type=1101 audit(1768877996.178:1178): pid=7854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.241072 kernel: audit: type=1103 audit(1768877996.189:1179): pid=7854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.190000 audit[7854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff069dc4b0 a2=3 a3=0 items=0 ppid=1 pid=7854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:56.291117 kernel: audit: type=1006 audit(1768877996.190:1180): pid=7854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1 Jan 20 02:59:56.294162 kernel: audit: type=1300 audit(1768877996.190:1180): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff069dc4b0 a2=3 a3=0 items=0 ppid=1 pid=7854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:59:56.294210 kernel: audit: type=1327 audit(1768877996.190:1180): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:56.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:59:56.290155 systemd-logind[1612]: New session 57 of user core. Jan 20 02:59:56.314191 systemd[1]: Started session-57.scope - Session 57 of User core. 
Jan 20 02:59:56.357000 audit[7854]: USER_START pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.383000 audit[7857]: CRED_ACQ pid=7857 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.452967 kernel: audit: type=1105 audit(1768877996.357:1181): pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.453108 kernel: audit: type=1103 audit(1768877996.383:1182): pid=7857 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.731985 kubelet[2963]: E0120 02:59:56.731014 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 02:59:56.731985 kubelet[2963]: E0120 02:59:56.731272 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 02:59:56.736293 kubelet[2963]: E0120 02:59:56.731807 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 02:59:56.875640 sshd[7857]: Connection closed by 10.0.0.1 port 40806 Jan 20 02:59:56.875000 audit[7854]: USER_END pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.877682 sshd-session[7854]: pam_unix(sshd:session): session closed for user core Jan 20 02:59:56.897729 systemd[1]: sshd@56-10.0.0.129:22-10.0.0.1:40806.service: 
Deactivated successfully. Jan 20 02:59:56.875000 audit[7854]: CRED_DISP pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.919613 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 02:59:56.921864 systemd-logind[1612]: Session 57 logged out. Waiting for processes to exit. Jan 20 02:59:56.935688 systemd-logind[1612]: Removed session 57. Jan 20 02:59:56.945798 kernel: audit: type=1106 audit(1768877996.875:1183): pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.945936 kernel: audit: type=1104 audit(1768877996.875:1184): pid=7854 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:59:56.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.129:22-10.0.0.1:40806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:59:57.726846 kubelet[2963]: E0120 02:59:57.726748 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 02:59:59.737939 kubelet[2963]: E0120 02:59:59.735137 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:00:01.976992 systemd[1]: Started sshd@57-10.0.0.129:22-10.0.0.1:40822.service - OpenSSH per-connection server daemon (10.0.0.1:40822). Jan 20 03:00:01.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.129:22-10.0.0.1:40822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:01.983919 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:01.984017 kernel: audit: type=1130 audit(1768878001.975:1186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.129:22-10.0.0.1:40822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:02.386000 audit[7911]: USER_ACCT pid=7911 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.408159 sshd[7911]: Accepted publickey for core from 10.0.0.1 port 40822 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:02.416306 sshd-session[7911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:02.465120 kernel: audit: type=1101 audit(1768878002.386:1187): pid=7911 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.465259 kernel: audit: type=1103 audit(1768878002.413:1188): pid=7911 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.413000 audit[7911]: CRED_ACQ pid=7911 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.501294 systemd-logind[1612]: New session 58 of user core. 
Jan 20 03:00:02.555601 kernel: audit: type=1006 audit(1768878002.414:1189): pid=7911 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Jan 20 03:00:02.558443 kernel: audit: type=1300 audit(1768878002.414:1189): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8857bd40 a2=3 a3=0 items=0 ppid=1 pid=7911 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:02.414000 audit[7911]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8857bd40 a2=3 a3=0 items=0 ppid=1 pid=7911 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:02.636447 kernel: audit: type=1327 audit(1768878002.414:1189): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:02.414000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:02.683694 systemd[1]: Started session-58.scope - Session 58 of User core. 
Jan 20 03:00:02.719000 audit[7911]: USER_START pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.807919 kernel: audit: type=1105 audit(1768878002.719:1190): pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.808034 kernel: audit: type=1103 audit(1768878002.729:1191): pid=7914 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:02.729000 audit[7914]: CRED_ACQ pid=7914 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:03.393557 sshd[7914]: Connection closed by 10.0.0.1 port 40822 Jan 20 03:00:03.399721 sshd-session[7911]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:03.403000 audit[7911]: USER_END pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:03.438425 systemd[1]: sshd@57-10.0.0.129:22-10.0.0.1:40822.service: Deactivated successfully. Jan 20 03:00:03.442777 systemd-logind[1612]: Session 58 logged out. 
Waiting for processes to exit. Jan 20 03:00:03.459142 kernel: audit: type=1106 audit(1768878003.403:1192): pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:03.449452 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 03:00:03.408000 audit[7911]: CRED_DISP pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:03.494273 systemd-logind[1612]: Removed session 58. Jan 20 03:00:03.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.129:22-10.0.0.1:40822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:03.502278 kernel: audit: type=1104 audit(1768878003.408:1193): pid=7911 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:07.718951 kubelet[2963]: E0120 03:00:07.717595 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:00:07.750205 kubelet[2963]: E0120 03:00:07.747984 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:00:08.512892 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:08.513041 kernel: audit: type=1130 audit(1768878008.499:1195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.129:22-10.0.0.1:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:08.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.129:22-10.0.0.1:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:08.505940 systemd[1]: Started sshd@58-10.0.0.129:22-10.0.0.1:34934.service - OpenSSH per-connection server daemon (10.0.0.1:34934). 
Jan 20 03:00:08.759678 kubelet[2963]: E0120 03:00:08.759148 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:00:08.776678 kubelet[2963]: E0120 03:00:08.776065 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 03:00:09.012161 sshd[7930]: Accepted publickey for core from 10.0.0.1 port 34934 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:09.007000 audit[7930]: USER_ACCT pid=7930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.077898 kernel: audit: type=1101 audit(1768878009.007:1196): pid=7930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.081000 audit[7930]: CRED_ACQ pid=7930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.084091 sshd-session[7930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:09.130949 systemd-logind[1612]: New session 59 of user core. Jan 20 03:00:09.212347 kernel: audit: type=1103 audit(1768878009.081:1197): pid=7930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.212608 kernel: audit: type=1006 audit(1768878009.081:1198): pid=7930 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1 Jan 20 03:00:09.212655 kernel: audit: type=1300 audit(1768878009.081:1198): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc59ca0780 a2=3 a3=0 items=0 ppid=1 pid=7930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:09.081000 audit[7930]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc59ca0780 a2=3 a3=0 items=0 ppid=1 pid=7930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:09.243675 kernel: audit: type=1327 audit(1768878009.081:1198): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:09.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:09.277058 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 20 03:00:09.315000 audit[7930]: USER_START pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.340619 kernel: audit: type=1105 audit(1768878009.315:1199): pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.327000 audit[7933]: CRED_ACQ pid=7933 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.379711 kernel: audit: type=1103 audit(1768878009.327:1200): pid=7933 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:09.785718 kubelet[2963]: E0120 03:00:09.785575 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 03:00:10.541915 sshd[7933]: Connection closed by 10.0.0.1 port 34934 Jan 20 03:00:10.581440 sshd-session[7930]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:10.598000 audit[7930]: USER_END pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:10.615698 systemd[1]: sshd@58-10.0.0.129:22-10.0.0.1:34934.service: Deactivated successfully. Jan 20 03:00:10.670178 systemd[1]: session-59.scope: Deactivated successfully. 
Jan 20 03:00:10.686601 kernel: audit: type=1106 audit(1768878010.598:1201): pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:10.686798 kernel: audit: type=1104 audit(1768878010.598:1202): pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:10.598000 audit[7930]: CRED_DISP pid=7930 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:10.712204 systemd-logind[1612]: Session 59 logged out. Waiting for processes to exit. Jan 20 03:00:10.720906 systemd-logind[1612]: Removed session 59. Jan 20 03:00:10.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.129:22-10.0.0.1:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:11.737860 kubelet[2963]: E0120 03:00:11.731016 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 03:00:11.784067 kubelet[2963]: E0120 03:00:11.766903 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 03:00:15.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.129:22-10.0.0.1:48684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:15.701234 systemd[1]: Started sshd@59-10.0.0.129:22-10.0.0.1:48684.service - OpenSSH per-connection server daemon (10.0.0.1:48684). Jan 20 03:00:15.719821 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:15.719974 kernel: audit: type=1130 audit(1768878015.698:1204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.129:22-10.0.0.1:48684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:16.001000 audit[7956]: USER_ACCT pid=7956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.010298 sshd[7956]: Accepted publickey for core from 10.0.0.1 port 48684 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:16.016770 sshd-session[7956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:16.027996 kernel: audit: type=1101 audit(1768878016.001:1205): pid=7956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.028119 kernel: audit: type=1103 audit(1768878016.012:1206): pid=7956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.012000 audit[7956]: CRED_ACQ pid=7956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.062874 kernel: audit: type=1006 audit(1768878016.012:1207): pid=7956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 20 03:00:16.063026 kernel: audit: type=1300 audit(1768878016.012:1207): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceb24ba20 a2=3 a3=0 items=0 ppid=1 pid=7956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:16.012000 audit[7956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceb24ba20 a2=3 a3=0 items=0 ppid=1 pid=7956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:16.012000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:16.088323 kernel: audit: type=1327 audit(1768878016.012:1207): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:16.091265 systemd-logind[1612]: New session 60 of user core. Jan 20 03:00:16.116929 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 20 03:00:16.146000 audit[7956]: USER_START pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.187812 kernel: audit: type=1105 audit(1768878016.146:1208): pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.187907 kernel: audit: type=1103 audit(1768878016.185:1209): pid=7959 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.185000 audit[7959]: CRED_ACQ pid=7959 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.794950 sshd[7959]: Connection closed by 10.0.0.1 port 48684 Jan 20 03:00:16.798276 sshd-session[7956]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:16.806000 audit[7956]: USER_END pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.818645 systemd-logind[1612]: Session 60 logged out. Waiting for processes to exit. Jan 20 03:00:16.824233 systemd[1]: sshd@59-10.0.0.129:22-10.0.0.1:48684.service: Deactivated successfully. Jan 20 03:00:16.827540 kernel: audit: type=1106 audit(1768878016.806:1210): pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.808000 audit[7956]: CRED_DISP pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.842446 systemd[1]: session-60.scope: Deactivated successfully. 
Jan 20 03:00:16.850264 kernel: audit: type=1104 audit(1768878016.808:1211): pid=7956 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:16.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.129:22-10.0.0.1:48684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:16.860680 systemd-logind[1612]: Removed session 60. Jan 20 03:00:18.716413 kubelet[2963]: E0120 03:00:18.713457 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:00:21.735026 kubelet[2963]: E0120 03:00:21.732336 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:00:21.740276 kubelet[2963]: E0120 03:00:21.737144 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 03:00:21.857316 systemd[1]: Started sshd@60-10.0.0.129:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700). Jan 20 03:00:21.875557 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:21.877898 kernel: audit: type=1130 audit(1768878021.857:1213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.129:22-10.0.0.1:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:21.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.129:22-10.0.0.1:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:22.131000 audit[7972]: USER_ACCT pid=7972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.136319 sshd[7972]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:22.143232 sshd-session[7972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:22.175626 kernel: audit: type=1101 audit(1768878022.131:1214): pid=7972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.176905 kernel: audit: type=1103 audit(1768878022.135:1215): pid=7972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.135000 audit[7972]: CRED_ACQ pid=7972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.193739 kernel: audit: type=1006 audit(1768878022.135:1216): pid=7972 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 20 03:00:22.187121 systemd-logind[1612]: New session 61 of user core. 
Jan 20 03:00:22.135000 audit[7972]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff6bbe640 a2=3 a3=0 items=0 ppid=1 pid=7972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:22.225944 kernel: audit: type=1300 audit(1768878022.135:1216): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff6bbe640 a2=3 a3=0 items=0 ppid=1 pid=7972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:22.227806 kernel: audit: type=1327 audit(1768878022.135:1216): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:22.135000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:22.247430 systemd[1]: Started session-61.scope - Session 61 of User core. 
Jan 20 03:00:22.273000 audit[7972]: USER_START pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.358550 kernel: audit: type=1105 audit(1768878022.273:1217): pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.358730 kernel: audit: type=1103 audit(1768878022.293:1218): pid=7975 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.293000 audit[7975]: CRED_ACQ pid=7975 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.723333 kubelet[2963]: E0120 03:00:22.723235 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 03:00:22.752100 sshd[7975]: Connection closed by 10.0.0.1 port 48700 Jan 20 03:00:22.754157 sshd-session[7972]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:22.760000 audit[7972]: USER_END pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.806955 kernel: audit: type=1106 audit(1768878022.760:1219): pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.775641 systemd-logind[1612]: Session 61 logged out. Waiting for processes to exit. Jan 20 03:00:22.786342 systemd[1]: sshd@60-10.0.0.129:22-10.0.0.1:48700.service: Deactivated successfully. Jan 20 03:00:22.762000 audit[7972]: CRED_DISP pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.813821 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 03:00:22.834929 systemd-logind[1612]: Removed session 61. 
Jan 20 03:00:22.838574 kernel: audit: type=1104 audit(1768878022.762:1220): pid=7972 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:22.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.129:22-10.0.0.1:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:23.724571 kubelet[2963]: E0120 03:00:23.723858 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:00:25.738531 kubelet[2963]: E0120 03:00:25.738300 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 03:00:26.715082 kubelet[2963]: E0120 03:00:26.715026 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 03:00:27.879926 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:27.880070 kernel: audit: type=1130 audit(1768878027.823:1222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.129:22-10.0.0.1:56258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:27.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.129:22-10.0.0.1:56258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:27.824040 systemd[1]: Started sshd@61-10.0.0.129:22-10.0.0.1:56258.service - OpenSSH per-connection server daemon (10.0.0.1:56258). 
Jan 20 03:00:28.075315 sshd[7988]: Accepted publickey for core from 10.0.0.1 port 56258 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:28.074000 audit[7988]: USER_ACCT pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.080157 sshd-session[7988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:28.112663 kernel: audit: type=1101 audit(1768878028.074:1223): pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.075000 audit[7988]: CRED_ACQ pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.127764 systemd-logind[1612]: New session 62 of user core. 
Jan 20 03:00:28.163548 kernel: audit: type=1103 audit(1768878028.075:1224): pid=7988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.163742 kernel: audit: type=1006 audit(1768878028.075:1225): pid=7988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Jan 20 03:00:28.179759 kernel: audit: type=1300 audit(1768878028.075:1225): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee012b590 a2=3 a3=0 items=0 ppid=1 pid=7988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:28.075000 audit[7988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee012b590 a2=3 a3=0 items=0 ppid=1 pid=7988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:28.188051 systemd[1]: Started session-62.scope - Session 62 of User core. 
Jan 20 03:00:28.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:28.208000 audit[7988]: USER_START pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.308089 kernel: audit: type=1327 audit(1768878028.075:1225): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:28.308222 kernel: audit: type=1105 audit(1768878028.208:1226): pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.308254 kernel: audit: type=1103 audit(1768878028.220:1227): pid=7991 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.220000 audit[7991]: CRED_ACQ pid=7991 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.750778 sshd[7991]: Connection closed by 10.0.0.1 port 56258 Jan 20 03:00:28.749189 sshd-session[7988]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:28.754000 audit[7988]: USER_END pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.765539 systemd-logind[1612]: Session 62 logged out. Waiting for processes to exit. Jan 20 03:00:28.772756 systemd[1]: sshd@61-10.0.0.129:22-10.0.0.1:56258.service: Deactivated successfully. Jan 20 03:00:28.793874 kernel: audit: type=1106 audit(1768878028.754:1228): pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.793966 kernel: audit: type=1104 audit(1768878028.754:1229): pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.754000 audit[7988]: CRED_DISP pid=7988 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:28.785568 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 03:00:28.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.129:22-10.0.0.1:56258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:28.821893 systemd-logind[1612]: Removed session 62. Jan 20 03:00:33.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.129:22-10.0.0.1:56260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:33.832781 systemd[1]: Started sshd@62-10.0.0.129:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260). Jan 20 03:00:33.848716 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:00:33.848902 kernel: audit: type=1130 audit(1768878033.832:1231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.129:22-10.0.0.1:56260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:34.454000 audit[8028]: USER_ACCT pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.458383 sshd[8028]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:34.477248 sshd-session[8028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:34.459000 audit[8028]: CRED_ACQ pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.528818 systemd-logind[1612]: New session 63 of user core. 
Jan 20 03:00:34.561006 kernel: audit: type=1101 audit(1768878034.454:1232): pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.561161 kernel: audit: type=1103 audit(1768878034.459:1233): pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.596694 kernel: audit: type=1006 audit(1768878034.473:1234): pid=8028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1 Jan 20 03:00:34.596851 kernel: audit: type=1300 audit(1768878034.473:1234): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfba25580 a2=3 a3=0 items=0 ppid=1 pid=8028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:34.473000 audit[8028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfba25580 a2=3 a3=0 items=0 ppid=1 pid=8028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:34.640749 kernel: audit: type=1327 audit(1768878034.473:1234): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:34.473000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:34.645975 systemd[1]: Started session-63.scope - Session 63 of User core. 
Jan 20 03:00:34.663000 audit[8028]: USER_START pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.685000 audit[8031]: CRED_ACQ pid=8031 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.743033 kubelet[2963]: E0120 03:00:34.727324 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 03:00:34.756730 kernel: audit: type=1105 audit(1768878034.663:1235): pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:34.757405 kernel: audit: type=1103 
audit(1768878034.685:1236): pid=8031 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.262305 sshd[8031]: Connection closed by 10.0.0.1 port 56260 Jan 20 03:00:35.266622 sshd-session[8028]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:35.268000 audit[8028]: USER_END pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.338576 kernel: audit: type=1106 audit(1768878035.268:1237): pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.338762 kernel: audit: type=1104 audit(1768878035.268:1238): pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.268000 audit[8028]: CRED_DISP pid=8028 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.373437 systemd[1]: sshd@62-10.0.0.129:22-10.0.0.1:56260.service: Deactivated successfully. 
Jan 20 03:00:35.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.129:22-10.0.0.1:56260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:35.387791 systemd[1]: session-63.scope: Deactivated successfully. Jan 20 03:00:35.402083 systemd-logind[1612]: Session 63 logged out. Waiting for processes to exit. Jan 20 03:00:35.431082 systemd[1]: Started sshd@63-10.0.0.129:22-10.0.0.1:43616.service - OpenSSH per-connection server daemon (10.0.0.1:43616). Jan 20 03:00:35.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.129:22-10.0.0.1:43616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:35.443039 systemd-logind[1612]: Removed session 63. Jan 20 03:00:35.686016 sshd[8045]: Accepted publickey for core from 10.0.0.1 port 43616 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:35.685000 audit[8045]: USER_ACCT pid=8045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.696000 audit[8045]: CRED_ACQ pid=8045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.698000 audit[8045]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd7076320 a2=3 a3=0 items=0 ppid=1 pid=8045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
03:00:35.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:35.701250 sshd-session[8045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:35.729049 kubelet[2963]: E0120 03:00:35.728769 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 03:00:35.733003 kubelet[2963]: E0120 03:00:35.731034 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:00:35.777947 systemd-logind[1612]: New session 64 of user core. Jan 20 03:00:35.794035 systemd[1]: Started session-64.scope - Session 64 of User core. 
Jan 20 03:00:35.827000 audit[8045]: USER_START pid=8045 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:35.833000 audit[8048]: CRED_ACQ pid=8048 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:36.724190 kubelet[2963]: E0120 03:00:36.722564 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:00:37.482464 sshd[8048]: Connection closed by 10.0.0.1 port 43616 Jan 20 03:00:37.496262 sshd-session[8045]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:37.497000 audit[8045]: USER_END pid=8045 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:37.508000 audit[8045]: CRED_DISP pid=8045 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:37.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.129:22-10.0.0.1:43616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:37.545648 systemd[1]: sshd@63-10.0.0.129:22-10.0.0.1:43616.service: Deactivated successfully. Jan 20 03:00:37.564227 systemd[1]: session-64.scope: Deactivated successfully. Jan 20 03:00:37.569134 systemd-logind[1612]: Session 64 logged out. Waiting for processes to exit. Jan 20 03:00:37.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.129:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:37.590100 systemd[1]: Started sshd@64-10.0.0.129:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628). Jan 20 03:00:37.600811 systemd-logind[1612]: Removed session 64. 
Jan 20 03:00:37.903105 sshd[8060]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:37.896000 audit[8060]: USER_ACCT pid=8060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:37.904000 audit[8060]: CRED_ACQ pid=8060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:37.904000 audit[8060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0c8492e0 a2=3 a3=0 items=0 ppid=1 pid=8060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:37.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:37.907198 sshd-session[8060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:37.927187 systemd-logind[1612]: New session 65 of user core. Jan 20 03:00:37.956829 systemd[1]: Started session-65.scope - Session 65 of User core. 
Jan 20 03:00:37.965000 audit[8060]: USER_START pid=8060 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:37.967000 audit[8064]: CRED_ACQ pid=8064 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:39.745921 kubelet[2963]: E0120 03:00:39.744799 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 03:00:39.750336 kubelet[2963]: E0120 03:00:39.749682 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 03:00:41.412684 sshd[8064]: Connection closed by 10.0.0.1 port 43628 Jan 20 03:00:41.419171 sshd-session[8060]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:41.479603 kernel: kauditd_printk_skb: 20 
callbacks suppressed Jan 20 03:00:41.479803 kernel: audit: type=1106 audit(1768878041.420:1255): pid=8060 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:41.420000 audit[8060]: USER_END pid=8060 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:41.425000 audit[8060]: CRED_DISP pid=8060 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:41.513184 kernel: audit: type=1104 audit(1768878041.425:1256): pid=8060 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:41.557469 kernel: audit: type=1325 audit(1768878041.401:1257): table=filter:134 family=2 entries=26 op=nft_register_rule pid=8081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:41.401000 audit[8081]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=8081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:41.541858 systemd[1]: sshd@64-10.0.0.129:22-10.0.0.1:43628.service: Deactivated successfully. Jan 20 03:00:41.555950 systemd[1]: session-65.scope: Deactivated successfully. 
Jan 20 03:00:41.401000 audit[8081]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd80082f90 a2=0 a3=7ffd80082f7c items=0 ppid=3069 pid=8081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:41.574959 systemd-logind[1612]: Session 65 logged out. Waiting for processes to exit. Jan 20 03:00:41.600895 systemd[1]: Started sshd@65-10.0.0.129:22-10.0.0.1:43642.service - OpenSSH per-connection server daemon (10.0.0.1:43642). Jan 20 03:00:41.604584 systemd-logind[1612]: Removed session 65. Jan 20 03:00:41.614272 kernel: audit: type=1300 audit(1768878041.401:1257): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd80082f90 a2=0 a3=7ffd80082f7c items=0 ppid=3069 pid=8081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:41.614382 kernel: audit: type=1327 audit(1768878041.401:1257): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:41.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:41.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.129:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:41.656695 kernel: audit: type=1131 audit(1768878041.542:1258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.129:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:41.656840 kernel: audit: type=1325 audit(1768878041.561:1259): table=nat:135 family=2 entries=20 op=nft_register_rule pid=8081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:41.561000 audit[8081]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=8081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:41.687787 kernel: audit: type=1300 audit(1768878041.561:1259): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd80082f90 a2=0 a3=0 items=0 ppid=3069 pid=8081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:41.561000 audit[8081]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd80082f90 a2=0 a3=0 items=0 ppid=3069 pid=8081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:41.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:41.758895 kernel: audit: type=1327 audit(1768878041.561:1259): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:41.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.129:22-10.0.0.1:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:41.800316 kernel: audit: type=1130 audit(1768878041.596:1260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.129:22-10.0.0.1:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:00:42.002000 audit[8086]: USER_ACCT pid=8086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:42.005432 sshd[8086]: Accepted publickey for core from 10.0.0.1 port 43642 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:42.009000 audit[8086]: CRED_ACQ pid=8086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:42.009000 audit[8086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd6b17020 a2=3 a3=0 items=0 ppid=1 pid=8086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:42.009000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:42.010423 sshd-session[8086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:42.047906 systemd-logind[1612]: New session 66 of user core. Jan 20 03:00:42.064044 systemd[1]: Started session-66.scope - Session 66 of User core. 
Jan 20 03:00:42.087000 audit[8086]: USER_START pid=8086 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:42.095000 audit[8089]: CRED_ACQ pid=8089 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:42.735000 audit[8097]: NETFILTER_CFG table=filter:136 family=2 entries=38 op=nft_register_rule pid=8097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:42.735000 audit[8097]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff94915a50 a2=0 a3=7fff94915a3c items=0 ppid=3069 pid=8097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:42.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:42.749000 audit[8097]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=8097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 03:00:42.749000 audit[8097]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff94915a50 a2=0 a3=0 items=0 ppid=3069 pid=8097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:42.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 03:00:43.401775 sshd[8089]: Connection 
closed by 10.0.0.1 port 43642 Jan 20 03:00:43.404906 sshd-session[8086]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:43.430000 audit[8086]: USER_END pid=8086 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:43.430000 audit[8086]: CRED_DISP pid=8086 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:43.469955 systemd-logind[1612]: Session 66 logged out. Waiting for processes to exit. Jan 20 03:00:43.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.129:22-10.0.0.1:43652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:43.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.129:22-10.0.0.1:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:43.507174 systemd[1]: Started sshd@66-10.0.0.129:22-10.0.0.1:43652.service - OpenSSH per-connection server daemon (10.0.0.1:43652). Jan 20 03:00:43.508234 systemd[1]: sshd@65-10.0.0.129:22-10.0.0.1:43642.service: Deactivated successfully. Jan 20 03:00:43.534133 systemd[1]: session-66.scope: Deactivated successfully. Jan 20 03:00:43.556469 systemd-logind[1612]: Removed session 66. 
Jan 20 03:00:43.758000 audit[8099]: USER_ACCT pid=8099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:43.763065 sshd[8099]: Accepted publickey for core from 10.0.0.1 port 43652 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:00:43.762000 audit[8099]: CRED_ACQ pid=8099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:43.763000 audit[8099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff48d02310 a2=3 a3=0 items=0 ppid=1 pid=8099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:00:43.763000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:00:43.766818 sshd-session[8099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:00:43.830323 systemd-logind[1612]: New session 67 of user core. Jan 20 03:00:43.848630 systemd[1]: Started session-67.scope - Session 67 of User core. 
Jan 20 03:00:43.864000 audit[8099]: USER_START pid=8099 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:43.880000 audit[8105]: CRED_ACQ pid=8105 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:44.349996 sshd[8105]: Connection closed by 10.0.0.1 port 43652 Jan 20 03:00:44.349406 sshd-session[8099]: pam_unix(sshd:session): session closed for user core Jan 20 03:00:44.366000 audit[8099]: USER_END pid=8099 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:44.368000 audit[8099]: CRED_DISP pid=8099 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:00:44.386232 systemd[1]: sshd@66-10.0.0.129:22-10.0.0.1:43652.service: Deactivated successfully. Jan 20 03:00:44.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.129:22-10.0.0.1:43652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:44.394246 systemd[1]: session-67.scope: Deactivated successfully. Jan 20 03:00:44.402232 systemd-logind[1612]: Session 67 logged out. Waiting for processes to exit. 
Jan 20 03:00:44.409327 systemd-logind[1612]: Removed session 67. Jan 20 03:00:47.788289 kubelet[2963]: E0120 03:00:47.777374 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:00:47.788289 kubelet[2963]: E0120 03:00:47.785129 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:00:49.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.129:22-10.0.0.1:40412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:00:49.448005 systemd[1]: Started sshd@67-10.0.0.129:22-10.0.0.1:40412.service - OpenSSH per-connection server daemon (10.0.0.1:40412). 
Jan 20 03:00:49.462002 kernel: kauditd_printk_skb: 27 callbacks suppressed
Jan 20 03:00:49.462191 kernel: audit: type=1130 audit(1768878049.447:1280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.129:22-10.0.0.1:40412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:00:49.692252 sshd[8118]: Accepted publickey for core from 10.0.0.1 port 40412 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:00:49.691000 audit[8118]: USER_ACCT pid=8118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.721102 sshd-session[8118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:00:49.729825 kernel: audit: type=1101 audit(1768878049.691:1281): pid=8118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.790591 kernel: audit: type=1103 audit(1768878049.713:1282): pid=8118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.713000 audit[8118]: CRED_ACQ pid=8118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.805685 systemd-logind[1612]: New session 68 of user core.
Jan 20 03:00:49.809940 kubelet[2963]: E0120 03:00:49.807337 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 03:00:49.862019 kernel: audit: type=1006 audit(1768878049.716:1283): pid=8118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=68 res=1
Jan 20 03:00:49.864053 kernel: audit: type=1300 audit(1768878049.716:1283): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef6142900 a2=3 a3=0 items=0 ppid=1 pid=8118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:00:49.716000 audit[8118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef6142900 a2=3 a3=0 items=0 ppid=1 pid=8118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:00:49.855936 systemd[1]: Started session-68.scope - Session 68 of User core.
Jan 20 03:00:49.716000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:00:49.898761 kernel: audit: type=1327 audit(1768878049.716:1283): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:00:49.890000 audit[8118]: USER_START pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.943687 kernel: audit: type=1105 audit(1768878049.890:1284): pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.898000 audit[8121]: CRED_ACQ pid=8121 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:49.987802 kernel: audit: type=1103 audit(1768878049.898:1285): pid=8121 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:50.355656 sshd[8121]: Connection closed by 10.0.0.1 port 40412
Jan 20 03:00:50.357853 sshd-session[8118]: pam_unix(sshd:session): session closed for user core
Jan 20 03:00:50.360000 audit[8118]: USER_END pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:50.380009 systemd[1]: sshd@67-10.0.0.129:22-10.0.0.1:40412.service: Deactivated successfully.
Jan 20 03:00:50.384676 systemd[1]: session-68.scope: Deactivated successfully.
Jan 20 03:00:50.389640 systemd-logind[1612]: Session 68 logged out. Waiting for processes to exit.
Jan 20 03:00:50.397170 kernel: audit: type=1106 audit(1768878050.360:1286): pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:50.401339 systemd-logind[1612]: Removed session 68.
Jan 20 03:00:50.360000 audit[8118]: CRED_DISP pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:50.441692 kernel: audit: type=1104 audit(1768878050.360:1287): pid=8118 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:50.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.129:22-10.0.0.1:40412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:00:50.723145 kubelet[2963]: E0120 03:00:50.719714 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 03:00:50.726426 kubelet[2963]: E0120 03:00:50.726330 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 03:00:50.751356 kubelet[2963]: E0120 03:00:50.750351 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 03:00:54.718228 kubelet[2963]: E0120 03:00:54.718177 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:00:54.719986 kubelet[2963]: E0120 03:00:54.719427 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:00:55.402170 systemd[1]: Started sshd@68-10.0.0.129:22-10.0.0.1:60978.service - OpenSSH per-connection server daemon (10.0.0.1:60978).
Jan 20 03:00:55.434454 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:00:55.434997 kernel: audit: type=1130 audit(1768878055.401:1289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.129:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:00:55.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.129:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:00:55.716000 audit[8134]: USER_ACCT pid=8134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:55.719828 sshd[8134]: Accepted publickey for core from 10.0.0.1 port 60978 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:00:55.732918 sshd-session[8134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:00:55.776598 systemd-logind[1612]: New session 69 of user core.
Jan 20 03:00:55.784073 kernel: audit: type=1101 audit(1768878055.716:1290): pid=8134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:55.724000 audit[8134]: CRED_ACQ pid=8134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:55.816816 kernel: audit: type=1103 audit(1768878055.724:1291): pid=8134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:55.816962 kernel: audit: type=1006 audit(1768878055.724:1292): pid=8134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=69 res=1
Jan 20 03:00:55.839445 kernel: audit: type=1300 audit(1768878055.724:1292): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe234ead20 a2=3 a3=0 items=0 ppid=1 pid=8134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:00:55.724000 audit[8134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe234ead20 a2=3 a3=0 items=0 ppid=1 pid=8134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:00:55.724000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:00:55.849639 kernel: audit: type=1327 audit(1768878055.724:1292): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:00:55.871856 systemd[1]: Started session-69.scope - Session 69 of User core.
Jan 20 03:00:55.946000 audit[8134]: USER_START pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.043884 kernel: audit: type=1105 audit(1768878055.946:1293): pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.039000 audit[8137]: CRED_ACQ pid=8137 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.068807 kernel: audit: type=1103 audit(1768878056.039:1294): pid=8137 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.685910 sshd[8137]: Connection closed by 10.0.0.1 port 60978
Jan 20 03:00:56.683795 sshd-session[8134]: pam_unix(sshd:session): session closed for user core
Jan 20 03:00:56.706000 audit[8134]: USER_END pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.724096 systemd-logind[1612]: Session 69 logged out. Waiting for processes to exit.
Jan 20 03:00:56.725666 systemd[1]: sshd@68-10.0.0.129:22-10.0.0.1:60978.service: Deactivated successfully.
Jan 20 03:00:56.738986 systemd[1]: session-69.scope: Deactivated successfully.
Jan 20 03:00:56.769944 systemd-logind[1612]: Removed session 69.
Jan 20 03:00:56.709000 audit[8134]: CRED_DISP pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.797551 kernel: audit: type=1106 audit(1768878056.706:1295): pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.797661 kernel: audit: type=1104 audit(1768878056.709:1296): pid=8134 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:00:56.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.129:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:00:59.734962 kubelet[2963]: E0120 03:00:59.733264 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:01:00.718087 kubelet[2963]: E0120 03:01:00.717683 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 03:01:01.716636 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:01:01.716770 kernel: audit: type=1130 audit(1768878061.707:1298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.129:22-10.0.0.1:60982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:01.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.129:22-10.0.0.1:60982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:01.708007 systemd[1]: Started sshd@69-10.0.0.129:22-10.0.0.1:60982.service - OpenSSH per-connection server daemon (10.0.0.1:60982).
Jan 20 03:01:01.735690 kubelet[2963]: E0120 03:01:01.732756 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 03:01:01.750759 kubelet[2963]: E0120 03:01:01.750709 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 03:01:01.872860 sshd[8174]: Accepted publickey for core from 10.0.0.1 port 60982 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:01:01.869000 audit[8174]: USER_ACCT pid=8174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:01.882105 sshd-session[8174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:01:01.880000 audit[8174]: CRED_ACQ pid=8174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:01.911158 systemd-logind[1612]: New session 70 of user core.
Jan 20 03:01:01.934299 kernel: audit: type=1101 audit(1768878061.869:1299): pid=8174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:01.934413 kernel: audit: type=1103 audit(1768878061.880:1300): pid=8174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:01.934809 kernel: audit: type=1006 audit(1768878061.880:1301): pid=8174 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=70 res=1
Jan 20 03:01:01.880000 audit[8174]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa9c915d0 a2=3 a3=0 items=0 ppid=1 pid=8174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:01.958915 systemd[1]: Started session-70.scope - Session 70 of User core.
Jan 20 03:01:01.988906 kernel: audit: type=1300 audit(1768878061.880:1301): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa9c915d0 a2=3 a3=0 items=0 ppid=1 pid=8174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:01.880000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:01.978000 audit[8174]: USER_START pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.044681 kernel: audit: type=1327 audit(1768878061.880:1301): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:02.044823 kernel: audit: type=1105 audit(1768878061.978:1302): pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.047884 kernel: audit: type=1103 audit(1768878061.985:1303): pid=8179 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:01.985000 audit[8179]: CRED_ACQ pid=8179 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.413248 sshd[8179]: Connection closed by 10.0.0.1 port 60982
Jan 20 03:01:02.415911 sshd-session[8174]: pam_unix(sshd:session): session closed for user core
Jan 20 03:01:02.437000 audit[8174]: USER_END pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.469938 systemd-logind[1612]: Session 70 logged out. Waiting for processes to exit.
Jan 20 03:01:02.437000 audit[8174]: CRED_DISP pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.478655 systemd[1]: sshd@69-10.0.0.129:22-10.0.0.1:60982.service: Deactivated successfully.
Jan 20 03:01:02.485762 systemd[1]: session-70.scope: Deactivated successfully.
Jan 20 03:01:02.499181 systemd-logind[1612]: Removed session 70.
Jan 20 03:01:02.508391 kernel: audit: type=1106 audit(1768878062.437:1304): pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.509054 kernel: audit: type=1104 audit(1768878062.437:1305): pid=8174 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:02.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.129:22-10.0.0.1:60982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:04.489330 kubelet[2963]: E0120 03:01:04.430735 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 03:01:04.718978 kubelet[2963]: E0120 03:01:04.718918 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 03:01:05.747795 kubelet[2963]: E0120 03:01:05.746111 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 03:01:06.716448 kubelet[2963]: E0120 03:01:06.715330 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:01:06.716448 kubelet[2963]: E0120 03:01:06.716308 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:01:08.607987 systemd[1]: Started sshd@70-10.0.0.129:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520).
Jan 20 03:01:08.668166 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:01:08.668285 kernel: audit: type=1130 audit(1768878068.606:1307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.129:22-10.0.0.1:48520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:08.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.129:22-10.0.0.1:48520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:09.030000 audit[8197]: USER_ACCT pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.041182 sshd[8197]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:01:09.052002 sshd-session[8197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:01:09.100560 kernel: audit: type=1101 audit(1768878069.030:1308): pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.118025 systemd-logind[1612]: New session 71 of user core.
Jan 20 03:01:09.041000 audit[8197]: CRED_ACQ pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.155530 kernel: audit: type=1103 audit(1768878069.041:1309): pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.165840 systemd[1]: Started session-71.scope - Session 71 of User core.
Jan 20 03:01:09.209545 kernel: audit: type=1006 audit(1768878069.041:1310): pid=8197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=71 res=1
Jan 20 03:01:09.041000 audit[8197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9a01dc0 a2=3 a3=0 items=0 ppid=1 pid=8197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:09.238041 kernel: audit: type=1300 audit(1768878069.041:1310): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9a01dc0 a2=3 a3=0 items=0 ppid=1 pid=8197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:09.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:09.274570 kernel: audit: type=1327 audit(1768878069.041:1310): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:09.274725 kernel: audit: type=1105 audit(1768878069.193:1311): pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.193000 audit[8197]: USER_START pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.297586 kernel: audit: type=1103 audit(1768878069.210:1312): pid=8201 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.210000 audit[8201]: CRED_ACQ pid=8201 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.727131 sshd[8201]: Connection closed by 10.0.0.1 port 48520
Jan 20 03:01:09.729810 sshd-session[8197]: pam_unix(sshd:session): session closed for user core
Jan 20 03:01:09.733000 audit[8197]: USER_END pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:09.750665 systemd[1]: sshd@70-10.0.0.129:22-10.0.0.1:48520.service: Deactivated successfully.
Jan 20 03:01:09.756469 systemd[1]: session-71.scope: Deactivated successfully.
Jan 20 03:01:09.763022 systemd-logind[1612]: Session 71 logged out. Waiting for processes to exit.
Jan 20 03:01:09.767827 systemd-logind[1612]: Removed session 71.
Jan 20 03:01:09.775006 kernel: audit: type=1106 audit(1768878069.733:1313): pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:09.733000 audit[8197]: CRED_DISP pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:09.797543 kernel: audit: type=1104 audit(1768878069.733:1314): pid=8197 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:09.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.129:22-10.0.0.1:48520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:01:10.716544 kubelet[2963]: E0120 03:01:10.716004 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:01:14.723523 kubelet[2963]: E0120 03:01:14.717980 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:01:14.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.129:22-10.0.0.1:58188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:14.782036 systemd[1]: Started sshd@71-10.0.0.129:22-10.0.0.1:58188.service - OpenSSH per-connection server daemon (10.0.0.1:58188). Jan 20 03:01:14.794227 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:01:14.794347 kernel: audit: type=1130 audit(1768878074.781:1316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.129:22-10.0.0.1:58188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:01:15.404000 audit[8215]: USER_ACCT pid=8215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.411048 sshd[8215]: Accepted publickey for core from 10.0.0.1 port 58188 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:01:15.416301 sshd-session[8215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:01:15.406000 audit[8215]: CRED_ACQ pid=8215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.462665 systemd-logind[1612]: New session 72 of user core. Jan 20 03:01:15.487062 kernel: audit: type=1101 audit(1768878075.404:1317): pid=8215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.487265 kernel: audit: type=1103 audit(1768878075.406:1318): pid=8215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.487314 kernel: audit: type=1006 audit(1768878075.406:1319): pid=8215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=72 res=1 Jan 20 03:01:15.520742 kernel: audit: type=1300 audit(1768878075.406:1319): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddf4ca600 a2=3 a3=0 items=0 ppid=1 pid=8215 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:15.406000 audit[8215]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddf4ca600 a2=3 a3=0 items=0 ppid=1 pid=8215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:15.575935 kernel: audit: type=1327 audit(1768878075.406:1319): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:15.406000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:15.572330 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 20 03:01:15.602000 audit[8215]: USER_START pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.661540 kernel: audit: type=1105 audit(1768878075.602:1320): pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.661707 kernel: audit: type=1103 audit(1768878075.611:1321): pid=8220 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.611000 audit[8220]: CRED_ACQ pid=8220 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:15.767718 kubelet[2963]: E0120 03:01:15.766018 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:01:16.080198 sshd[8220]: Connection closed by 10.0.0.1 port 58188 Jan 20 03:01:16.081259 sshd-session[8215]: pam_unix(sshd:session): session closed for user core Jan 20 03:01:16.092000 audit[8215]: USER_END pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:16.113958 systemd[1]: sshd@71-10.0.0.129:22-10.0.0.1:58188.service: Deactivated successfully. Jan 20 03:01:16.122919 systemd[1]: session-72.scope: Deactivated successfully. Jan 20 03:01:16.126144 systemd-logind[1612]: Session 72 logged out. Waiting for processes to exit. Jan 20 03:01:16.094000 audit[8215]: CRED_DISP pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:16.158589 systemd-logind[1612]: Removed session 72. 
Jan 20 03:01:16.185993 kernel: audit: type=1106 audit(1768878076.092:1322): pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:16.186140 kernel: audit: type=1104 audit(1768878076.094:1323): pid=8215 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:16.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.129:22-10.0.0.1:58188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:16.729708 kubelet[2963]: E0120 03:01:16.726236 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 03:01:16.754083 kubelet[2963]: E0120 03:01:16.752580 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 03:01:18.724557 kubelet[2963]: E0120 03:01:18.721780 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 03:01:19.743403 containerd[1640]: time="2026-01-20T03:01:19.742573174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:01:19.890552 containerd[1640]: time="2026-01-20T03:01:19.889782162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:19.895655 containerd[1640]: time="2026-01-20T03:01:19.894813638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:01:19.895655 containerd[1640]: time="2026-01-20T03:01:19.895414491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:19.896018 kubelet[2963]: E0120 03:01:19.895975 2963 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:01:19.897206 kubelet[2963]: E0120 03:01:19.896610 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:01:19.897206 kubelet[2963]: E0120 03:01:19.896722 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:19.900859 containerd[1640]: time="2026-01-20T03:01:19.900590797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 03:01:19.990273 containerd[1640]: time="2026-01-20T03:01:19.990177281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:19.995440 containerd[1640]: time="2026-01-20T03:01:19.994905959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:01:19.995440 containerd[1640]: time="2026-01-20T03:01:19.995015923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:19.995649 kubelet[2963]: E0120 03:01:19.995416 2963 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:01:19.995649 kubelet[2963]: E0120 03:01:19.995468 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:01:19.995649 kubelet[2963]: E0120 03:01:19.995598 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-68fcfd7799-l9qd2_calico-system(ea0ad3c0-ee09-401c-8807-5b06e8d22025): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:19.995803 kubelet[2963]: E0120 03:01:19.995649 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 03:01:21.109681 systemd[1]: Started sshd@72-10.0.0.129:22-10.0.0.1:58196.service - 
OpenSSH per-connection server daemon (10.0.0.1:58196). Jan 20 03:01:21.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.129:22-10.0.0.1:58196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:21.116025 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:01:21.116154 kernel: audit: type=1130 audit(1768878081.108:1325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.129:22-10.0.0.1:58196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:21.346000 audit[8233]: USER_ACCT pid=8233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.360811 sshd[8233]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:01:21.371539 sshd-session[8233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:01:21.364000 audit[8233]: CRED_ACQ pid=8233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.414190 kernel: audit: type=1101 audit(1768878081.346:1326): pid=8233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.414290 kernel: audit: type=1103 audit(1768878081.364:1327): pid=8233 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.414646 kernel: audit: type=1006 audit(1768878081.369:1328): pid=8233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=73 res=1 Jan 20 03:01:21.424413 systemd-logind[1612]: New session 73 of user core. Jan 20 03:01:21.369000 audit[8233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5a41b8c0 a2=3 a3=0 items=0 ppid=1 pid=8233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:21.369000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:21.468962 kernel: audit: type=1300 audit(1768878081.369:1328): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5a41b8c0 a2=3 a3=0 items=0 ppid=1 pid=8233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:21.469116 kernel: audit: type=1327 audit(1768878081.369:1328): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:21.491981 systemd[1]: Started session-73.scope - Session 73 of User core. 
Jan 20 03:01:21.511000 audit[8233]: USER_START pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.595542 kernel: audit: type=1105 audit(1768878081.511:1329): pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.595995 kernel: audit: type=1103 audit(1768878081.522:1330): pid=8236 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:21.522000 audit[8236]: CRED_ACQ pid=8236 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:22.043729 sshd[8236]: Connection closed by 10.0.0.1 port 58196 Jan 20 03:01:22.045777 sshd-session[8233]: pam_unix(sshd:session): session closed for user core Jan 20 03:01:22.057000 audit[8233]: USER_END pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:22.057000 audit[8233]: CRED_DISP pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:22.095823 kernel: audit: type=1106 audit(1768878082.057:1331): pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:22.096456 kernel: audit: type=1104 audit(1768878082.057:1332): pid=8233 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:22.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.129:22-10.0.0.1:58196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:22.091943 systemd[1]: sshd@72-10.0.0.129:22-10.0.0.1:58196.service: Deactivated successfully. Jan 20 03:01:22.112825 systemd[1]: session-73.scope: Deactivated successfully. Jan 20 03:01:22.122974 systemd-logind[1612]: Session 73 logged out. Waiting for processes to exit. Jan 20 03:01:22.135213 systemd-logind[1612]: Removed session 73. Jan 20 03:01:23.727259 kubelet[2963]: E0120 03:01:23.724383 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:01:27.074751 systemd[1]: Started sshd@73-10.0.0.129:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206). Jan 20 03:01:27.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.129:22-10.0.0.1:36206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 20 03:01:27.084172 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:01:27.084727 kernel: audit: type=1130 audit(1768878087.073:1334): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.129:22-10.0.0.1:36206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:27.271000 audit[8249]: USER_ACCT pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.293430 sshd[8249]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:01:27.293944 kernel: audit: type=1101 audit(1768878087.271:1335): pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.296000 audit[8249]: CRED_ACQ pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.300943 sshd-session[8249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:01:27.312781 kernel: audit: type=1103 audit(1768878087.296:1336): pid=8249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.312926 kernel: audit: type=1006 audit(1768878087.296:1337): 
pid=8249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=74 res=1 Jan 20 03:01:27.335769 kernel: audit: type=1300 audit(1768878087.296:1337): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3b18f30 a2=3 a3=0 items=0 ppid=1 pid=8249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:27.296000 audit[8249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3b18f30 a2=3 a3=0 items=0 ppid=1 pid=8249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:27.335149 systemd-logind[1612]: New session 74 of user core. Jan 20 03:01:27.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:27.344756 kernel: audit: type=1327 audit(1768878087.296:1337): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:27.362860 systemd[1]: Started session-74.scope - Session 74 of User core. 
Jan 20 03:01:27.378000 audit[8249]: USER_START pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.386000 audit[8252]: CRED_ACQ pid=8252 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.430547 kernel: audit: type=1105 audit(1768878087.378:1338): pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.430717 kernel: audit: type=1103 audit(1768878087.386:1339): pid=8252 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.786077 sshd[8252]: Connection closed by 10.0.0.1 port 36206 Jan 20 03:01:27.787756 sshd-session[8249]: pam_unix(sshd:session): session closed for user core Jan 20 03:01:27.790000 audit[8249]: USER_END pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.803855 systemd-logind[1612]: Session 74 logged out. Waiting for processes to exit. 
Jan 20 03:01:27.809537 systemd[1]: sshd@73-10.0.0.129:22-10.0.0.1:36206.service: Deactivated successfully. Jan 20 03:01:27.822788 systemd[1]: session-74.scope: Deactivated successfully. Jan 20 03:01:27.844128 systemd-logind[1612]: Removed session 74. Jan 20 03:01:27.790000 audit[8249]: CRED_DISP pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.881402 kernel: audit: type=1106 audit(1768878087.790:1340): pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.881880 kernel: audit: type=1104 audit(1768878087.790:1341): pid=8249 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:27.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.129:22-10.0.0.1:36206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:01:28.716569 kubelet[2963]: E0120 03:01:28.716435 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:01:28.728227 kubelet[2963]: E0120 03:01:28.727852 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:01:31.277353 containerd[1640]: time="2026-01-20T03:01:31.277086672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:01:31.305564 kubelet[2963]: E0120 03:01:31.287035 2963 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.513s" Jan 20 03:01:31.418589 kubelet[2963]: E0120 03:01:31.418402 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025" Jan 20 03:01:31.518431 containerd[1640]: time="2026-01-20T03:01:31.517657095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:31.540878 containerd[1640]: time="2026-01-20T03:01:31.539836483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:01:31.540878 containerd[1640]: time="2026-01-20T03:01:31.540036805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:31.544831 kubelet[2963]: E0120 03:01:31.540598 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:01:31.544831 kubelet[2963]: E0120 03:01:31.540832 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:01:31.548620 kubelet[2963]: E0120 03:01:31.548089 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-554b6967f8-4mv9r_calico-system(9eab50e8-9c7c-4942-9bf1-628e8f6481c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:31.548620 kubelet[2963]: E0120 03:01:31.548237 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8" Jan 20 03:01:31.558794 containerd[1640]: time="2026-01-20T03:01:31.554953306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:01:31.685892 containerd[1640]: time="2026-01-20T03:01:31.685466380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:31.730738 containerd[1640]: time="2026-01-20T03:01:31.727543196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:01:31.731638 containerd[1640]: time="2026-01-20T03:01:31.730926780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:31.743108 kubelet[2963]: E0120 03:01:31.738893 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:01:31.762387 kubelet[2963]: E0120 03:01:31.760439 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:01:31.762387 kubelet[2963]: E0120 03:01:31.760823 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:31.762661 containerd[1640]: time="2026-01-20T03:01:31.761866145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:01:31.883592 containerd[1640]: time="2026-01-20T03:01:31.883418083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:31.891331 containerd[1640]: time="2026-01-20T03:01:31.888978794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:01:31.891331 containerd[1640]: time="2026-01-20T03:01:31.889090932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:31.891639 kubelet[2963]: E0120 03:01:31.889844 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:01:31.891639 kubelet[2963]: E0120 03:01:31.890008 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:01:31.891639 kubelet[2963]: E0120 03:01:31.890208 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-9fwc6_calico-apiserver(67615726-cef8-44da-a26c-7795f613fcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:31.891639 kubelet[2963]: E0120 03:01:31.890258 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb" Jan 20 03:01:31.893342 containerd[1640]: time="2026-01-20T03:01:31.892741621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:01:32.006965 containerd[1640]: time="2026-01-20T03:01:32.006844909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:32.013627 containerd[1640]: time="2026-01-20T03:01:32.012057706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:01:32.013627 containerd[1640]: time="2026-01-20T03:01:32.012160917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes 
read=0" Jan 20 03:01:32.013743 kubelet[2963]: E0120 03:01:32.013301 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:01:32.013743 kubelet[2963]: E0120 03:01:32.013447 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:01:32.019358 containerd[1640]: time="2026-01-20T03:01:32.015175114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:01:32.019664 kubelet[2963]: E0120 03:01:32.019076 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5hks8_calico-system(2048147f-559b-4756-8896-b644ce0ae95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:32.019664 kubelet[2963]: E0120 03:01:32.019124 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e" Jan 20 03:01:32.165055 containerd[1640]: time="2026-01-20T03:01:32.153570076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:32.189042 
containerd[1640]: time="2026-01-20T03:01:32.188751428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:32.190950 containerd[1640]: time="2026-01-20T03:01:32.190841134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:01:32.192455 kubelet[2963]: E0120 03:01:32.191615 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:01:32.192455 kubelet[2963]: E0120 03:01:32.191734 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:01:32.192455 kubelet[2963]: E0120 03:01:32.191898 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zb7gt_calico-system(2beb3373-3a79-403b-953d-80d6dc35b793): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:32.192455 kubelet[2963]: E0120 03:01:32.191951 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793" Jan 20 03:01:32.844940 systemd[1]: Started sshd@74-10.0.0.129:22-10.0.0.1:36212.service - OpenSSH per-connection server daemon (10.0.0.1:36212). Jan 20 03:01:32.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.129:22-10.0.0.1:36212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:32.866400 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:01:32.866560 kernel: audit: type=1130 audit(1768878092.844:1343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.129:22-10.0.0.1:36212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:01:33.207000 audit[8298]: USER_ACCT pid=8298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.235201 sshd-session[8298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:01:33.237455 sshd[8298]: Accepted publickey for core from 10.0.0.1 port 36212 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:01:33.244624 kernel: audit: type=1101 audit(1768878093.207:1344): pid=8298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.316807 kernel: audit: type=1103 audit(1768878093.229:1345): pid=8298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.229000 audit[8298]: CRED_ACQ pid=8298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.340617 systemd-logind[1612]: New session 75 of user core. 
Jan 20 03:01:33.348226 kernel: audit: type=1006 audit(1768878093.229:1346): pid=8298 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=75 res=1 Jan 20 03:01:33.229000 audit[8298]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbfa8a430 a2=3 a3=0 items=0 ppid=1 pid=8298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:33.387377 kernel: audit: type=1300 audit(1768878093.229:1346): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbfa8a430 a2=3 a3=0 items=0 ppid=1 pid=8298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:33.414422 kernel: audit: type=1327 audit(1768878093.229:1346): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:33.229000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:33.411942 systemd[1]: Started session-75.scope - Session 75 of User core. 
Jan 20 03:01:33.444000 audit[8298]: USER_START pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.494737 kernel: audit: type=1105 audit(1768878093.444:1347): pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.505000 audit[8301]: CRED_ACQ pid=8301 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:33.540954 kernel: audit: type=1103 audit(1768878093.505:1348): pid=8301 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:34.472352 sshd[8301]: Connection closed by 10.0.0.1 port 36212 Jan 20 03:01:34.489774 sshd-session[8298]: pam_unix(sshd:session): session closed for user core Jan 20 03:01:34.491000 audit[8298]: USER_END pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:34.516459 systemd-logind[1612]: Session 75 logged out. Waiting for processes to exit. 
Jan 20 03:01:34.519110 systemd[1]: sshd@74-10.0.0.129:22-10.0.0.1:36212.service: Deactivated successfully. Jan 20 03:01:34.536077 kernel: audit: type=1106 audit(1768878094.491:1349): pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:34.492000 audit[8298]: CRED_DISP pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:34.576763 systemd[1]: session-75.scope: Deactivated successfully. Jan 20 03:01:34.593418 kernel: audit: type=1104 audit(1768878094.492:1350): pid=8298 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:34.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.129:22-10.0.0.1:36212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:34.689733 systemd-logind[1612]: Removed session 75. Jan 20 03:01:39.524023 systemd[1]: Started sshd@75-10.0.0.129:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058). Jan 20 03:01:39.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.129:22-10.0.0.1:36058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 03:01:39.555597 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 03:01:39.555833 kernel: audit: type=1130 audit(1768878099.522:1352): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.129:22-10.0.0.1:36058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 03:01:39.721936 containerd[1640]: time="2026-01-20T03:01:39.721460348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:01:39.727000 audit[8330]: USER_ACCT pid=8330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.743454 sshd[8330]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ Jan 20 03:01:39.744402 sshd-session[8330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:01:39.742000 audit[8330]: CRED_ACQ pid=8330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.777999 kernel: audit: type=1101 audit(1768878099.727:1353): pid=8330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.778387 kernel: audit: type=1103 audit(1768878099.742:1354): pid=8330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.775770 systemd-logind[1612]: New session 76 of user core. Jan 20 03:01:39.802586 kernel: audit: type=1006 audit(1768878099.742:1355): pid=8330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=76 res=1 Jan 20 03:01:39.742000 audit[8330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc58ef600 a2=3 a3=0 items=0 ppid=1 pid=8330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:39.805177 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 20 03:01:39.835269 kernel: audit: type=1300 audit(1768878099.742:1355): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc58ef600 a2=3 a3=0 items=0 ppid=1 pid=8330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 03:01:39.835412 kernel: audit: type=1327 audit(1768878099.742:1355): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:39.742000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 03:01:39.813000 audit[8330]: USER_START pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.863752 kernel: audit: type=1105 audit(1768878099.813:1356): pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.863896 kernel: audit: type=1103 audit(1768878099.818:1357): pid=8333 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.818000 audit[8333]: CRED_ACQ pid=8333 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:39.864119 containerd[1640]: time="2026-01-20T03:01:39.861088743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:01:39.918964 containerd[1640]: time="2026-01-20T03:01:39.918461953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:01:39.918964 containerd[1640]: time="2026-01-20T03:01:39.918659298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 03:01:39.921272 kubelet[2963]: E0120 03:01:39.921151 2963 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:01:39.925727 kubelet[2963]: E0120 03:01:39.921871 2963 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:01:39.926000 kubelet[2963]: E0120 03:01:39.925884 2963 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-99b79f8fd-h8mhs_calico-apiserver(78de0405-4f44-497e-8007-519223ee3a61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:01:39.926000 kubelet[2963]: E0120 03:01:39.925948 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61" Jan 20 03:01:40.415763 sshd[8333]: Connection closed by 10.0.0.1 port 36058 Jan 20 03:01:40.418754 sshd-session[8330]: pam_unix(sshd:session): session closed for user core Jan 20 03:01:40.441000 audit[8330]: USER_END pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:40.466441 kernel: audit: type=1106 audit(1768878100.441:1358): pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 03:01:40.463879 systemd-logind[1612]: Session 76 logged out. 
Waiting for processes to exit.
Jan 20 03:01:40.441000 audit[8330]: CRED_DISP pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:40.478777 systemd[1]: sshd@75-10.0.0.129:22-10.0.0.1:36058.service: Deactivated successfully.
Jan 20 03:01:40.489211 kernel: audit: type=1104 audit(1768878100.441:1359): pid=8330 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:40.490044 systemd[1]: session-76.scope: Deactivated successfully.
Jan 20 03:01:40.497124 systemd-logind[1612]: Removed session 76.
Jan 20 03:01:40.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.129:22-10.0.0.1:36058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:43.537293 kubelet[2963]: E0120 03:01:43.535569 2963 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.817s"
Jan 20 03:01:44.107048 kubelet[2963]: E0120 03:01:44.095390 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 03:01:45.999951 kubelet[2963]: E0120 03:01:45.998897 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 03:01:46.090288 kubelet[2963]: E0120 03:01:46.084054 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 03:01:46.100799 systemd[1]: Started sshd@76-10.0.0.129:22-10.0.0.1:40290.service - OpenSSH per-connection server daemon (10.0.0.1:40290).
Jan 20 03:01:46.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.129:22-10.0.0.1:40290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:46.110281 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:01:46.110427 kernel: audit: type=1130 audit(1768878106.098:1361): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.129:22-10.0.0.1:40290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:46.113370 kubelet[2963]: E0120 03:01:46.111458 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 03:01:46.126141 kubelet[2963]: E0120 03:01:46.126085 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 03:01:46.496000 audit[8348]: USER_ACCT pid=8348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.501152 sshd[8348]: Accepted publickey for core from 10.0.0.1 port 40290 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:01:46.513678 sshd-session[8348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:01:46.542826 kernel: audit: type=1101 audit(1768878106.496:1362): pid=8348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.554836 systemd-logind[1612]: New session 77 of user core.
Jan 20 03:01:46.504000 audit[8348]: CRED_ACQ pid=8348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.579166 systemd[1]: Started session-77.scope - Session 77 of User core.
Jan 20 03:01:46.632820 kernel: audit: type=1103 audit(1768878106.504:1363): pid=8348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.763583 kernel: audit: type=1006 audit(1768878106.504:1364): pid=8348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=77 res=1
Jan 20 03:01:46.771993 kernel: audit: type=1300 audit(1768878106.504:1364): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80202fc0 a2=3 a3=0 items=0 ppid=1 pid=8348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=77 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:46.809817 kernel: audit: type=1327 audit(1768878106.504:1364): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:46.504000 audit[8348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe80202fc0 a2=3 a3=0 items=0 ppid=1 pid=8348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=77 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:46.504000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:46.857000 audit[8348]: USER_START pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.877000 audit[8357]: CRED_ACQ pid=8357 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.905451 kernel: audit: type=1105 audit(1768878106.857:1365): pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:46.905662 kernel: audit: type=1103 audit(1768878106.877:1366): pid=8357 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:47.423733 sshd[8357]: Connection closed by 10.0.0.1 port 40290
Jan 20 03:01:47.423063 sshd-session[8348]: pam_unix(sshd:session): session closed for user core
Jan 20 03:01:47.434000 audit[8348]: USER_END pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:47.452378 systemd[1]: sshd@76-10.0.0.129:22-10.0.0.1:40290.service: Deactivated successfully.
Jan 20 03:01:47.464373 systemd[1]: session-77.scope: Deactivated successfully.
Jan 20 03:01:47.466664 systemd-logind[1612]: Session 77 logged out. Waiting for processes to exit.
Jan 20 03:01:47.474538 kernel: audit: type=1106 audit(1768878107.434:1367): pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:47.474909 systemd-logind[1612]: Removed session 77.
Jan 20 03:01:47.437000 audit[8348]: CRED_DISP pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:47.523166 kernel: audit: type=1104 audit(1768878107.437:1368): pid=8348 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:47.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.129:22-10.0.0.1:40290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:50.771000 audit[8370]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=8370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 03:01:50.771000 audit[8370]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec412d520 a2=0 a3=7ffec412d50c items=0 ppid=3069 pid=8370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:50.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 03:01:50.802000 audit[8370]: NETFILTER_CFG table=nat:139 family=2 entries=104 op=nft_register_chain pid=8370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 03:01:50.802000 audit[8370]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffec412d520 a2=0 a3=7ffec412d50c items=0 ppid=3069 pid=8370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:50.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 03:01:52.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.129:22-10.0.0.1:40294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:52.459553 systemd[1]: Started sshd@77-10.0.0.129:22-10.0.0.1:40294.service - OpenSSH per-connection server daemon (10.0.0.1:40294).
Jan 20 03:01:52.479701 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 20 03:01:52.479877 kernel: audit: type=1130 audit(1768878112.459:1372): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.129:22-10.0.0.1:40294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:52.647000 audit[8372]: USER_ACCT pid=8372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.660570 sshd[8372]: Accepted publickey for core from 10.0.0.1 port 40294 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:01:52.663680 sshd-session[8372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:01:52.674580 kernel: audit: type=1101 audit(1768878112.647:1373): pid=8372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.674689 kernel: audit: type=1103 audit(1768878112.658:1374): pid=8372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.658000 audit[8372]: CRED_ACQ pid=8372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.701581 systemd-logind[1612]: New session 78 of user core.
Jan 20 03:01:52.731988 kubelet[2963]: E0120 03:01:52.727765 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 03:01:52.735263 kernel: audit: type=1006 audit(1768878112.658:1375): pid=8372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=78 res=1
Jan 20 03:01:52.733083 systemd[1]: Started session-78.scope - Session 78 of User core.
Jan 20 03:01:52.658000 audit[8372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4ff8edb0 a2=3 a3=0 items=0 ppid=1 pid=8372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:52.658000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:52.805455 kernel: audit: type=1300 audit(1768878112.658:1375): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4ff8edb0 a2=3 a3=0 items=0 ppid=1 pid=8372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:52.805634 kernel: audit: type=1327 audit(1768878112.658:1375): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:52.805663 kernel: audit: type=1105 audit(1768878112.770:1376): pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.770000 audit[8372]: USER_START pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.861965 kernel: audit: type=1103 audit(1768878112.782:1377): pid=8375 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:52.782000 audit[8375]: CRED_ACQ pid=8375 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:53.308927 sshd[8375]: Connection closed by 10.0.0.1 port 40294
Jan 20 03:01:53.313617 sshd-session[8372]: pam_unix(sshd:session): session closed for user core
Jan 20 03:01:53.384787 kernel: audit: type=1106 audit(1768878113.330:1378): pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:53.330000 audit[8372]: USER_END pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:53.356439 systemd-logind[1612]: Session 78 logged out. Waiting for processes to exit.
Jan 20 03:01:53.357944 systemd[1]: sshd@77-10.0.0.129:22-10.0.0.1:40294.service: Deactivated successfully.
Jan 20 03:01:53.369347 systemd[1]: session-78.scope: Deactivated successfully.
Jan 20 03:01:53.387519 systemd-logind[1612]: Removed session 78.
Jan 20 03:01:53.330000 audit[8372]: CRED_DISP pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:53.415714 kernel: audit: type=1104 audit(1768878113.330:1379): pid=8372 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:53.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.129:22-10.0.0.1:40294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:58.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.129:22-10.0.0.1:38140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:58.359280 systemd[1]: Started sshd@78-10.0.0.129:22-10.0.0.1:38140.service - OpenSSH per-connection server daemon (10.0.0.1:38140).
Jan 20 03:01:58.369581 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:01:58.369749 kernel: audit: type=1130 audit(1768878118.357:1381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.129:22-10.0.0.1:38140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:58.752161 kubelet[2963]: E0120 03:01:58.745930 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 03:01:58.763293 kubelet[2963]: E0120 03:01:58.761094 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 03:01:58.768914 kubelet[2963]: E0120 03:01:58.768746 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 03:01:58.868000 audit[8388]: USER_ACCT pid=8388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:58.877878 sshd-session[8388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:01:58.901794 sshd[8388]: Accepted publickey for core from 10.0.0.1 port 38140 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:01:58.873000 audit[8388]: CRED_ACQ pid=8388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:58.915155 systemd-logind[1612]: New session 79 of user core.
Jan 20 03:01:58.924700 kernel: audit: type=1101 audit(1768878118.868:1382): pid=8388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:58.924796 kernel: audit: type=1103 audit(1768878118.873:1383): pid=8388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:58.924868 kernel: audit: type=1006 audit(1768878118.873:1384): pid=8388 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=79 res=1
Jan 20 03:01:58.958578 kernel: audit: type=1300 audit(1768878118.873:1384): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff781e790 a2=3 a3=0 items=0 ppid=1 pid=8388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:58.873000 audit[8388]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff781e790 a2=3 a3=0 items=0 ppid=1 pid=8388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:01:58.873000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:59.016797 systemd[1]: Started session-79.scope - Session 79 of User core.
Jan 20 03:01:59.026616 kernel: audit: type=1327 audit(1768878118.873:1384): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:01:59.032000 audit[8388]: USER_START pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.062662 kernel: audit: type=1105 audit(1768878119.032:1385): pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.062820 kernel: audit: type=1103 audit(1768878119.039:1386): pid=8391 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.039000 audit[8391]: CRED_ACQ pid=8391 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.348960 sshd[8391]: Connection closed by 10.0.0.1 port 38140
Jan 20 03:01:59.345895 sshd-session[8388]: pam_unix(sshd:session): session closed for user core
Jan 20 03:01:59.354000 audit[8388]: USER_END pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.364395 systemd[1]: sshd@78-10.0.0.129:22-10.0.0.1:38140.service: Deactivated successfully.
Jan 20 03:01:59.377002 systemd[1]: session-79.scope: Deactivated successfully.
Jan 20 03:01:59.391576 systemd-logind[1612]: Session 79 logged out. Waiting for processes to exit.
Jan 20 03:01:59.355000 audit[8388]: CRED_DISP pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.397012 systemd-logind[1612]: Removed session 79.
Jan 20 03:01:59.427305 kernel: audit: type=1106 audit(1768878119.354:1387): pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.427451 kernel: audit: type=1104 audit(1768878119.355:1388): pid=8388 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:01:59.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.129:22-10.0.0.1:38140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:01:59.738373 kubelet[2963]: E0120 03:01:59.733582 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 03:02:00.720978 kubelet[2963]: E0120 03:02:00.720348 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 03:02:04.395564 systemd[1]: Started sshd@79-10.0.0.129:22-10.0.0.1:38142.service - OpenSSH per-connection server daemon (10.0.0.1:38142).
Jan 20 03:02:04.461766 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:02:04.462038 kernel: audit: type=1130 audit(1768878124.395:1390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.129:22-10.0.0.1:38142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:04.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.129:22-10.0.0.1:38142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:04.709551 kernel: audit: type=1101 audit(1768878124.667:1391): pid=8432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.667000 audit[8432]: USER_ACCT pid=8432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.705426 systemd-logind[1612]: New session 80 of user core.
Jan 20 03:02:04.710641 sshd[8432]: Accepted publickey for core from 10.0.0.1 port 38142 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:02:04.675455 sshd-session[8432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:02:04.674000 audit[8432]: CRED_ACQ pid=8432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.796256 kernel: audit: type=1103 audit(1768878124.674:1392): pid=8432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.796401 kernel: audit: type=1006 audit(1768878124.674:1393): pid=8432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=80 res=1
Jan 20 03:02:04.674000 audit[8432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe82b3cff0 a2=3 a3=0 items=0 ppid=1 pid=8432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:04.812228 systemd[1]: Started session-80.scope - Session 80 of User core.
Jan 20 03:02:04.844596 kernel: audit: type=1300 audit(1768878124.674:1393): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe82b3cff0 a2=3 a3=0 items=0 ppid=1 pid=8432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:04.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:04.853818 kernel: audit: type=1327 audit(1768878124.674:1393): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:04.836000 audit[8432]: USER_START pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.879643 kernel: audit: type=1105 audit(1768878124.836:1394): pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.849000 audit[8435]: CRED_ACQ pid=8435 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:04.906939 kernel: audit: type=1103 audit(1768878124.849:1395): pid=8435 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:05.443441 sshd[8435]: Connection closed by 10.0.0.1 port 38142
Jan 20 03:02:05.450896 sshd-session[8432]: pam_unix(sshd:session): session closed for user core
Jan 20 03:02:05.452000 audit[8432]: USER_END pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:05.478660 systemd[1]: sshd@79-10.0.0.129:22-10.0.0.1:38142.service: Deactivated successfully.
Jan 20 03:02:05.488134 systemd[1]: session-80.scope: Deactivated successfully.
Jan 20 03:02:05.492593 systemd-logind[1612]: Session 80 logged out. Waiting for processes to exit.
Jan 20 03:02:05.505552 systemd-logind[1612]: Removed session 80.
Jan 20 03:02:05.452000 audit[8432]: CRED_DISP pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:05.522969 kernel: audit: type=1106 audit(1768878125.452:1396): pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:05.523136 kernel: audit: type=1104 audit(1768878125.452:1397): pid=8432 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:05.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.129:22-10.0.0.1:38142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:06.722258 kubelet[2963]: E0120 03:02:06.720221 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 03:02:08.727187 kubelet[2963]: E0120 03:02:08.721808 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:02:09.725639 kubelet[2963]: E0120 03:02:09.723779 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"
Jan 20 03:02:10.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.129:22-10.0.0.1:44512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:10.506820 systemd[1]: Started sshd@80-10.0.0.129:22-10.0.0.1:44512.service - OpenSSH per-connection server daemon (10.0.0.1:44512).
Jan 20 03:02:10.517116 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:02:10.517221 kernel: audit: type=1130 audit(1768878130.503:1399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.129:22-10.0.0.1:44512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:10.748168 kubelet[2963]: E0120 03:02:10.746347 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68fcfd7799-l9qd2" podUID="ea0ad3c0-ee09-401c-8807-5b06e8d22025"
Jan 20 03:02:10.792000 audit[8450]: USER_ACCT pid=8450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.802800 sshd[8450]: Accepted publickey for core from 10.0.0.1 port 44512 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:02:10.808361 sshd-session[8450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:02:10.835582 kernel: audit: type=1101 audit(1768878130.792:1400): pid=8450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.835700 kernel: audit: type=1103 audit(1768878130.806:1401): pid=8450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.806000 audit[8450]: CRED_ACQ pid=8450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.829906 systemd-logind[1612]: New session 81 of user core.
Jan 20 03:02:10.806000 audit[8450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf683a900 a2=3 a3=0 items=0 ppid=1 pid=8450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:10.849925 systemd[1]: Started session-81.scope - Session 81 of User core.
Jan 20 03:02:10.881022 kernel: audit: type=1006 audit(1768878130.806:1402): pid=8450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=81 res=1
Jan 20 03:02:10.881215 kernel: audit: type=1300 audit(1768878130.806:1402): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf683a900 a2=3 a3=0 items=0 ppid=1 pid=8450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:10.806000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:10.890974 kernel: audit: type=1327 audit(1768878130.806:1402): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:10.876000 audit[8450]: USER_START pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.922688 kernel: audit: type=1105 audit(1768878130.876:1403): pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.882000 audit[8453]: CRED_ACQ pid=8453 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:10.947563 kernel: audit: type=1103 audit(1768878130.882:1404): pid=8453 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:11.348210 sshd[8453]: Connection closed by 10.0.0.1 port 44512
Jan 20 03:02:11.347766 sshd-session[8450]: pam_unix(sshd:session): session closed for user core
Jan 20 03:02:11.365000 audit[8450]: USER_END pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:11.384110 systemd-logind[1612]: Session 81 logged out. Waiting for processes to exit.
Jan 20 03:02:11.396197 systemd[1]: sshd@80-10.0.0.129:22-10.0.0.1:44512.service: Deactivated successfully.
Jan 20 03:02:11.419849 systemd[1]: session-81.scope: Deactivated successfully.
Jan 20 03:02:11.442167 systemd-logind[1612]: Removed session 81.
Jan 20 03:02:11.365000 audit[8450]: CRED_DISP pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:11.492598 kernel: audit: type=1106 audit(1768878131.365:1405): pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:11.492761 kernel: audit: type=1104 audit(1768878131.365:1406): pid=8450 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:11.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.129:22-10.0.0.1:44512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:12.722245 kubelet[2963]: E0120 03:02:12.720972 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:02:12.735231 kubelet[2963]: E0120 03:02:12.733160 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-554b6967f8-4mv9r" podUID="9eab50e8-9c7c-4942-9bf1-628e8f6481c8"
Jan 20 03:02:13.730872 kubelet[2963]: E0120 03:02:13.726008 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5hks8" podUID="2048147f-559b-4756-8896-b644ce0ae95e"
Jan 20 03:02:13.740004 kubelet[2963]: E0120 03:02:13.739449 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zb7gt" podUID="2beb3373-3a79-403b-953d-80d6dc35b793"
Jan 20 03:02:16.387109 systemd[1]: Started sshd@81-10.0.0.129:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800).
Jan 20 03:02:16.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.129:22-10.0.0.1:38800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:16.415695 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:02:16.415822 kernel: audit: type=1130 audit(1768878136.386:1408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.129:22-10.0.0.1:38800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:16.599104 sshd[8467]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:02:16.595000 audit[8467]: USER_ACCT pid=8467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.620736 kernel: audit: type=1101 audit(1768878136.595:1409): pid=8467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.617000 audit[8467]: CRED_ACQ pid=8467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.625569 sshd-session[8467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:02:16.638752 kernel: audit: type=1103 audit(1768878136.617:1410): pid=8467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.650852 kernel: audit: type=1006 audit(1768878136.618:1411): pid=8467 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=82 res=1
Jan 20 03:02:16.618000 audit[8467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf9e2100 a2=3 a3=0 items=0 ppid=1 pid=8467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:16.618000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:16.687814 kernel: audit: type=1300 audit(1768878136.618:1411): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf9e2100 a2=3 a3=0 items=0 ppid=1 pid=8467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:16.687905 kernel: audit: type=1327 audit(1768878136.618:1411): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:16.708799 systemd-logind[1612]: New session 82 of user core.
Jan 20 03:02:16.729249 systemd[1]: Started session-82.scope - Session 82 of User core.
Jan 20 03:02:16.762000 audit[8467]: USER_START pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.791690 kernel: audit: type=1105 audit(1768878136.762:1412): pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.776000 audit[8470]: CRED_ACQ pid=8470 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:16.804609 kernel: audit: type=1103 audit(1768878136.776:1413): pid=8470 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:17.222281 sshd[8470]: Connection closed by 10.0.0.1 port 38800
Jan 20 03:02:17.222750 sshd-session[8467]: pam_unix(sshd:session): session closed for user core
Jan 20 03:02:17.228000 audit[8467]: USER_END pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:17.249796 systemd-logind[1612]: Session 82 logged out. Waiting for processes to exit.
Jan 20 03:02:17.255092 systemd[1]: sshd@81-10.0.0.129:22-10.0.0.1:38800.service: Deactivated successfully.
Jan 20 03:02:17.272704 systemd[1]: session-82.scope: Deactivated successfully.
Jan 20 03:02:17.277215 kernel: audit: type=1106 audit(1768878137.228:1414): pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:17.229000 audit[8467]: CRED_DISP pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:17.295075 systemd-logind[1612]: Removed session 82.
Jan 20 03:02:17.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.129:22-10.0.0.1:38800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:17.312434 kernel: audit: type=1104 audit(1768878137.229:1415): pid=8467 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:17.740450 kubelet[2963]: E0120 03:02:17.739828 2963 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:02:19.717358 kubelet[2963]: E0120 03:02:19.717234 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-h8mhs" podUID="78de0405-4f44-497e-8007-519223ee3a61"
Jan 20 03:02:22.253866 systemd[1]: Started sshd@82-10.0.0.129:22-10.0.0.1:38816.service - OpenSSH per-connection server daemon (10.0.0.1:38816).
Jan 20 03:02:22.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.129:22-10.0.0.1:38816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:22.258209 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 03:02:22.258435 kernel: audit: type=1130 audit(1768878142.253:1417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.129:22-10.0.0.1:38816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:22.423549 sshd[8483]: Accepted publickey for core from 10.0.0.1 port 38816 ssh2: RSA SHA256:BfTd6TFB1RrLX1rmhFbxUAISnMxD9NznezXIQHpkExQ
Jan 20 03:02:22.422000 audit[8483]: USER_ACCT pid=8483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.440850 sshd-session[8483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:02:22.435000 audit[8483]: CRED_ACQ pid=8483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.461938 kernel: audit: type=1101 audit(1768878142.422:1418): pid=8483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.462078 kernel: audit: type=1103 audit(1768878142.435:1419): pid=8483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.498452 systemd-logind[1612]: New session 83 of user core.
Jan 20 03:02:22.528263 kernel: audit: type=1006 audit(1768878142.435:1420): pid=8483 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=83 res=1
Jan 20 03:02:22.528349 kernel: audit: type=1300 audit(1768878142.435:1420): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf62a2d90 a2=3 a3=0 items=0 ppid=1 pid=8483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:22.435000 audit[8483]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf62a2d90 a2=3 a3=0 items=0 ppid=1 pid=8483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 03:02:22.562336 kernel: audit: type=1327 audit(1768878142.435:1420): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:22.435000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 03:02:22.563877 systemd[1]: Started session-83.scope - Session 83 of User core.
Jan 20 03:02:22.608000 audit[8483]: USER_START pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.620000 audit[8486]: CRED_ACQ pid=8486 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.665844 kernel: audit: type=1105 audit(1768878142.608:1421): pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.665962 kernel: audit: type=1103 audit(1768878142.620:1422): pid=8486 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.973674 sshd[8486]: Connection closed by 10.0.0.1 port 38816
Jan 20 03:02:22.976820 sshd-session[8483]: pam_unix(sshd:session): session closed for user core
Jan 20 03:02:22.985000 audit[8483]: USER_END pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.996576 systemd[1]: sshd@82-10.0.0.129:22-10.0.0.1:38816.service: Deactivated successfully.
Jan 20 03:02:23.003966 systemd[1]: session-83.scope: Deactivated successfully.
Jan 20 03:02:23.024331 kernel: audit: type=1106 audit(1768878142.985:1423): pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:23.024437 kernel: audit: type=1104 audit(1768878142.985:1424): pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:22.985000 audit[8483]: CRED_DISP pid=8483 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 03:02:23.021100 systemd-logind[1612]: Session 83 logged out. Waiting for processes to exit.
Jan 20 03:02:23.031829 systemd-logind[1612]: Removed session 83.
Jan 20 03:02:22.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.129:22-10.0.0.1:38816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 03:02:24.723872 kubelet[2963]: E0120 03:02:24.723457 2963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-99b79f8fd-9fwc6" podUID="67615726-cef8-44da-a26c-7795f613fcbb"