Jan 28 01:43:55.852177 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026
Jan 28 01:43:55.852402 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:43:55.852461 kernel: BIOS-provided physical RAM map:
Jan 28 01:43:55.852473 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 01:43:55.852482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 01:43:55.852491 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 01:43:55.852501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 28 01:43:55.852510 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 28 01:43:55.852555 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 01:43:55.852566 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 01:43:55.852606 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:43:55.852616 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 01:43:55.852625 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:43:55.852635 kernel: NX (Execute Disable) protection: active
Jan 28 01:43:55.852646 kernel: APIC: Static calls initialized
Jan 28 01:43:55.852742 kernel: SMBIOS 2.8 present.
Jan 28 01:43:55.852780 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 28 01:43:55.852790 kernel: DMI: Memory slots populated: 1/1
Jan 28 01:43:55.852800 kernel: Hypervisor detected: KVM
Jan 28 01:43:55.852811 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 01:43:55.852820 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:43:55.852830 kernel: kvm-clock: using sched offset of 18157985639 cycles
Jan 28 01:43:55.852841 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:43:55.852852 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:43:55.852897 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:43:55.852908 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:43:55.852919 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 01:43:55.852929 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 01:43:55.852940 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:43:55.852951 kernel: Using GB pages for direct mapping
Jan 28 01:43:55.852961 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:43:55.853003 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 28 01:43:55.853014 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853025 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853036 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853046 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 28 01:43:55.853056 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853067 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853109 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853121 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.853163 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 28 01:43:55.853175 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 28 01:43:55.853186 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 28 01:43:55.863030 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 28 01:43:55.863053 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 28 01:43:55.863066 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 28 01:43:55.863078 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 28 01:43:55.863092 kernel: No NUMA configuration found
Jan 28 01:43:55.863103 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 28 01:43:55.863125 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 28 01:43:55.863182 kernel: Zone ranges:
Jan 28 01:43:55.863197 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:43:55.863209 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 28 01:43:55.865346 kernel: Normal empty
Jan 28 01:43:55.865369 kernel: Device empty
Jan 28 01:43:55.865381 kernel: Movable zone start for each node
Jan 28 01:43:55.865394 kernel: Early memory node ranges
Jan 28 01:43:55.865456 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 01:43:55.865472 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 28 01:43:55.865484 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 28 01:43:55.865498 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:43:55.865512 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 01:43:55.865565 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 28 01:43:55.865579 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:43:55.870952 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:43:55.870974 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:43:55.870988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:43:55.871037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:43:55.871050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:43:55.871062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:43:55.871073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:43:55.871084 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:43:55.871130 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:43:55.871142 kernel: TSC deadline timer available
Jan 28 01:43:55.871154 kernel: CPU topo: Max. logical packages: 1
Jan 28 01:43:55.871165 kernel: CPU topo: Max. logical dies: 1
Jan 28 01:43:55.871175 kernel: CPU topo: Max. dies per package: 1
Jan 28 01:43:55.871187 kernel: CPU topo: Max. threads per core: 1
Jan 28 01:43:55.871198 kernel: CPU topo: Num. cores per package: 4
Jan 28 01:43:55.871286 kernel: CPU topo: Num. threads per package: 4
Jan 28 01:43:55.871299 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 01:43:55.871311 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:43:55.871324 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:43:55.871336 kernel: kvm-guest: setup PV sched yield
Jan 28 01:43:55.871350 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 01:43:55.871361 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:43:55.871373 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:43:55.876981 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:43:55.876995 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 01:43:55.877006 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 01:43:55.877018 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:43:55.877029 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:43:55.877040 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:43:55.877054 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:43:55.877105 kernel: random: crng init done
Jan 28 01:43:55.877117 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:43:55.877130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:43:55.877142 kernel: Fallback order for Node 0: 0
Jan 28 01:43:55.877154 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 28 01:43:55.877166 kernel: Policy zone: DMA32
Jan 28 01:43:55.877213 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:43:55.877262 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:43:55.877273 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 28 01:43:55.877287 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 01:43:55.877300 kernel: Dynamic Preempt: voluntary
Jan 28 01:43:55.877310 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:43:55.877323 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:43:55.877337 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:43:55.877391 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:43:55.877429 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:43:55.877440 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:43:55.877451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:43:55.877462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:43:55.877473 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.877485 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.877530 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.877542 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:43:55.877554 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:43:55.877645 kernel: Console: colour VGA+ 80x25
Jan 28 01:43:55.882809 kernel: printk: legacy console [ttyS0] enabled
Jan 28 01:43:55.882825 kernel: ACPI: Core revision 20240827
Jan 28 01:43:55.882837 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:43:55.882851 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:43:55.882863 kernel: x2apic enabled
Jan 28 01:43:55.882916 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:43:55.882981 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:43:55.882999 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:43:55.883015 kernel: kvm-guest: setup PV IPIs
Jan 28 01:43:55.883071 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:43:55.883085 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:43:55.883099 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:43:55.883112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:43:55.883124 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:43:55.883139 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:43:55.883150 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:43:55.883312 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:43:55.883329 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:43:55.883343 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:43:55.883356 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:43:55.883371 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:43:55.883384 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:43:55.883397 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:43:55.883464 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:43:55.883478 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:43:55.883492 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:43:55.883505 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:43:55.883518 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:43:55.883531 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:43:55.883544 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:43:55.883603 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:43:55.883617 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:43:55.883629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 01:43:55.883642 kernel: landlock: Up and running.
Jan 28 01:43:55.883656 kernel: SELinux: Initializing.
Jan 28 01:43:55.883821 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.883838 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.885528 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:43:55.885548 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:43:55.885561 kernel: signal: max sigframe size: 1776
Jan 28 01:43:55.885573 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:43:55.885585 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:43:55.885597 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 01:43:55.885611 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:43:55.885758 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:43:55.885770 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:43:55.885781 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:43:55.885794 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:43:55.885807 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:43:55.885820 kernel: Memory: 2445284K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120528K reserved, 0K cma-reserved)
Jan 28 01:43:55.885833 kernel: devtmpfs: initialized
Jan 28 01:43:55.885885 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:43:55.885897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:43:55.885910 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.885922 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:43:55.885935 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:43:55.885946 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:43:55.885958 kernel: audit: type=2000 audit(1769564612.826:1): state=initialized audit_enabled=0 res=1
Jan 28 01:43:55.886008 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:43:55.886020 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:43:55.886031 kernel: cpuidle: using governor menu
Jan 28 01:43:55.886043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:43:55.886055 kernel: dca service started, version 1.12.1
Jan 28 01:43:55.886067 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 28 01:43:55.886080 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 01:43:55.886135 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:43:55.886148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:43:55.886162 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:43:55.886174 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:43:55.886186 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:43:55.886196 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:43:55.886208 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:43:55.886287 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:43:55.886300 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:43:55.886311 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:43:55.886322 kernel: ACPI: Interpreter enabled
Jan 28 01:43:55.886334 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:43:55.886348 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:43:55.886360 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:43:55.886408 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:43:55.886420 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:43:55.886432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:43:55.887313 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:43:55.887613 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:43:55.888101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:43:55.888169 kernel: PCI host bridge to bus 0000:00
Jan 28 01:43:55.888496 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:43:55.888845 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:43:55.889111 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:43:55.889423 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 01:43:55.889754 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 01:43:55.890061 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 28 01:43:55.890543 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:43:55.890979 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 01:43:55.891341 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 01:43:55.891776 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 28 01:43:55.892105 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 28 01:43:55.892439 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 28 01:43:55.892791 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:43:55.893072 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 23437 usecs
Jan 28 01:43:55.893435 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 01:43:55.893847 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 28 01:43:55.895636 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 28 01:43:55.895992 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 28 01:43:55.896336 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 01:43:55.896611 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 28 01:43:55.897031 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 28 01:43:55.897366 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 28 01:43:55.897783 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 01:43:55.909974 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 28 01:43:55.910384 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 28 01:43:55.910926 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 28 01:43:55.911299 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 28 01:43:55.911843 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 01:43:55.912143 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:43:55.912493 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 01:43:55.912942 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 28 01:43:55.913291 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 28 01:43:55.914267 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 01:43:55.918175 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 28 01:43:55.918199 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:43:55.918212 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:43:55.918399 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:43:55.918414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:43:55.918426 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:43:55.938376 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:43:55.938399 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:43:55.938413 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:43:55.938426 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:43:55.938439 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:43:55.938452 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:43:55.938464 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:43:55.938597 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:43:55.938614 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:43:55.938626 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:43:55.938637 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:43:55.938650 kernel: iommu: Default domain type: Translated
Jan 28 01:43:55.938662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:43:55.938797 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:43:55.938859 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:43:55.938874 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 01:43:55.938889 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 28 01:43:55.939465 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:43:55.939839 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:43:55.940130 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:43:55.940153 kernel: vgaarb: loaded
Jan 28 01:43:55.940260 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:43:55.940279 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:43:55.940291 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:43:55.940302 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:43:55.940314 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:43:55.940327 kernel: pnp: PnP ACPI init
Jan 28 01:43:55.940823 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 01:43:55.940897 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:43:55.940910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:43:55.940926 kernel: NET: Registered PF_INET protocol family
Jan 28 01:43:55.940937 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:43:55.940949 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:43:55.940960 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:43:55.940975 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:43:55.941032 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:43:55.941044 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:43:55.941055 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.941066 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.941079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:43:55.941090 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:43:55.941422 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:43:55.941808 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:43:55.942060 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:43:55.942376 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 01:43:55.942656 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 01:43:55.943010 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 28 01:43:55.943029 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:43:55.943042 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:43:55.943107 kernel: Initialise system trusted keyrings
Jan 28 01:43:55.943118 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:43:55.943132 kernel: Key type asymmetric registered
Jan 28 01:43:55.943144 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:43:55.943155 kernel: hrtimer: interrupt took 10792038 ns
Jan 28 01:43:55.943167 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:43:55.943179 kernel: io scheduler mq-deadline registered
Jan 28 01:43:55.943272 kernel: io scheduler kyber registered
Jan 28 01:43:55.943284 kernel: io scheduler bfq registered
Jan 28 01:43:55.943296 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:43:55.943310 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:43:55.943322 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:43:55.943333 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:43:55.943346 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:43:55.943406 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:43:55.943419 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:43:55.943430 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:43:55.943443 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:43:55.943801 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:43:55.944068 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:43:55.944130 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:43:55.944436 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:43:43 UTC (1769564623)
Jan 28 01:43:55.944767 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 01:43:55.944785 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:43:55.944799 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:43:55.944812 kernel: Segment Routing with IPv6
Jan 28 01:43:55.944823 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:43:55.944881 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:43:55.944893 kernel: Key type dns_resolver registered
Jan 28 01:43:55.944907 kernel: IPI shorthand broadcast: enabled
Jan 28 01:43:55.944919 kernel: sched_clock: Marking stable (8775044059, 897742915)->(11082295171, -1409508197)
Jan 28 01:43:55.944930 kernel: registered taskstats version 1
Jan 28 01:43:55.944941 kernel: Loading compiled-in X.509 certificates
Jan 28 01:43:55.944956 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3'
Jan 28 01:43:55.945008 kernel: Demotion targets for Node 0: null
Jan 28 01:43:55.945020 kernel: Key type .fscrypt registered
Jan 28 01:43:55.945030 kernel: Key type fscrypt-provisioning registered
Jan 28 01:43:55.945045 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:43:55.945058 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:43:55.945069 kernel: ima: No architecture policies found
Jan 28 01:43:55.945079 kernel: clk: Disabling unused clocks
Jan 28 01:43:55.945134 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 28 01:43:55.945147 kernel: Write protecting the kernel read-only data: 47104k
Jan 28 01:43:55.945158 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 28 01:43:55.945170 kernel: Run /init as init process
Jan 28 01:43:55.945184 kernel: with arguments:
Jan 28 01:43:55.945199 kernel: /init
Jan 28 01:43:55.945210 kernel: with environment:
Jan 28 01:43:55.945273 kernel: HOME=/
Jan 28 01:43:55.945326 kernel: TERM=linux
Jan 28 01:43:55.945338 kernel: SCSI subsystem initialized
Jan 28 01:43:55.945348 kernel: libata version 3.00 loaded.
Jan 28 01:43:55.945656 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:43:55.945758 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:43:55.946056 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 28 01:43:55.946465 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 28 01:43:55.946821 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:43:55.947296 kernel: scsi host0: ahci
Jan 28 01:43:55.947605 kernel: scsi host1: ahci
Jan 28 01:43:55.948201 kernel: scsi host2: ahci
Jan 28 01:43:55.948568 kernel: scsi host3: ahci
Jan 28 01:43:55.949004 kernel: scsi host4: ahci
Jan 28 01:43:55.949384 kernel: scsi host5: ahci
Jan 28 01:43:55.949409 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 28 01:43:55.949425 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 28 01:43:55.949437 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 28 01:43:55.949504 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 28 01:43:55.949517 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 28 01:43:55.949529 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 28 01:43:55.949542 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:55.949557 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:55.949569 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:55.949582 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:55.949596 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:43:55.949652 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:55.949734 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:43:55.949752 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:43:55.949764 kernel: ata3.00: applying bridge limits
Jan 28 01:43:55.949776 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:43:55.949787 kernel: ata3.00: configured for UDMA/100
Jan 28 01:43:55.950299 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:43:55.950742 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:43:55.951045 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 28 01:43:55.951068 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:43:55.951488 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:43:55.951511 kernel: GPT:16515071 != 27000831
Jan 28 01:43:55.951531 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:43:55.951545 kernel: GPT:16515071 != 27000831
Jan 28 01:43:55.951560 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:43:55.951573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:43:55.951586 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:43:55.951983 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:43:55.952003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:43:55.952070 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:43:55.952086 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 28 01:43:55.952099 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 28 01:43:55.952112 kernel: raid6: avx2x4 gen() 10223 MB/s
Jan 28 01:43:55.952126 kernel: raid6: avx2x2 gen() 7126 MB/s
Jan 28 01:43:55.952139 kernel: raid6: avx2x1 gen() 4949 MB/s
Jan 28 01:43:55.952152 kernel: raid6: using algorithm avx2x4 gen() 10223 MB/s
Jan 28 01:43:55.952200 kernel: raid6: .... xor() 3073 MB/s, rmw enabled
Jan 28 01:43:55.952214 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:43:55.952271 kernel: xor: automatically using best checksumming function avx
Jan 28 01:43:55.952285 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:43:55.952304 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 28 01:43:55.952515 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad
Jan 28 01:43:55.952529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:43:55.952543 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:43:55.952557 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 28 01:43:55.952570 kernel: loop: module loaded
Jan 28 01:43:55.952621 kernel: loop0: detected capacity change from 0 to 100552
Jan 28 01:43:55.952635 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 01:43:55.952761 systemd[1]: Successfully made /usr/ read-only.
Jan 28 01:43:55.952781 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:43:55.952795 systemd[1]: Detected virtualization kvm.
Jan 28 01:43:55.952809 systemd[1]: Detected architecture x86-64.
Jan 28 01:43:55.952822 systemd[1]: Running in initrd.
Jan 28 01:43:55.952835 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:43:55.952892 systemd[1]: Hostname set to .
Jan 28 01:43:55.952904 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 01:43:55.952919 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:43:55.952934 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:43:55.952946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:43:55.952958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:43:55.953018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:43:55.953035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:43:55.953051 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:43:55.953066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:43:55.953081 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:43:55.953097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:43:55.953163 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:43:55.953180 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:43:55.953194 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:43:55.953210 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:43:55.953266 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:43:55.953280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:43:55.953294 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:43:55.953358 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:43:55.953372 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:43:55.953385 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 28 01:43:55.953398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:43:55.953412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:43:55.953428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:43:55.953493 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:43:55.953509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:43:55.953523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:43:55.953538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:43:55.953550 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:43:55.953562 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 28 01:43:55.953576 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:43:55.953642 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:43:55.953655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:43:55.953733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:43:55.954085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:43:55.954100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:43:55.954211 systemd-journald[321]: Collecting audit messages is enabled.
Jan 28 01:43:55.954290 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:43:55.954360 kernel: audit: type=1130 audit(1769564635.762:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:55.954375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:43:55.954390 systemd-journald[321]: Journal started
Jan 28 01:43:55.954414 systemd-journald[321]: Runtime Journal (/run/log/journal/419149a44f964b778968df4813c218c8) is 6M, max 48.2M, 42.1M free.
Jan 28 01:43:55.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:55.966551 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:43:55.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:55.996982 kernel: audit: type=1130 audit(1769564635.963:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:56.015005 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:43:56.383454 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 28 01:43:56.412903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:43:56.496754 kernel: audit: type=1130 audit(1769564636.411:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:56.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:56.495849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:43:56.499449 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:43:57.139216 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:43:57.139416 kernel: Bridge firewalling registered
Jan 28 01:43:56.686857 systemd-modules-load[323]: Inserted module 'br_netfilter'
Jan 28 01:43:57.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.190031 kernel: audit: type=1130 audit(1769564637.164:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.191624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:43:57.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.209540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:43:57.330658 kernel: audit: type=1130 audit(1769564637.204:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.330777 kernel: audit: type=1130 audit(1769564637.286:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.330372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:43:57.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.442149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:43:57.488633 kernel: audit: type=1130 audit(1769564637.410:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.488569 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:43:57.644011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:43:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.677582 kernel: audit: type=1130 audit(1769564637.658:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.674488 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:43:57.734602 kernel: audit: type=1130 audit(1769564637.683:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:57.704067 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:43:57.752000 audit: BPF prog-id=6 op=LOAD
Jan 28 01:43:57.759920 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:43:57.816454 kernel: audit: type=1334 audit(1769564637.752:11): prog-id=6 op=LOAD
Jan 28 01:43:57.917366 dracut-cmdline[356]: dracut-109
Jan 28 01:43:57.948894 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:43:58.256975 systemd-resolved[357]: Positive Trust Anchors:
Jan 28 01:43:58.257032 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:43:58.257039 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 28 01:43:58.257083 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:43:58.601161 systemd-resolved[357]: Defaulting to hostname 'linux'.
Jan 28 01:43:58.634607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:43:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:43:58.681754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:43:59.399390 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:43:59.499180 kernel: iscsi: registered transport (tcp)
Jan 28 01:43:59.613975 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:43:59.614058 kernel: QLogic iSCSI HBA Driver
Jan 28 01:44:00.135027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:44:00.266351 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:44:00.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:00.635972 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:44:01.145936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:44:01.240157 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 28 01:44:01.240233 kernel: audit: type=1130 audit(1769564641.180:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:01.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:01.252774 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:44:01.301452 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:44:01.865827 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:44:01.981322 kernel: audit: type=1130 audit(1769564641.902:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:01.981479 kernel: audit: type=1334 audit(1769564641.913:16): prog-id=7 op=LOAD
Jan 28 01:44:01.981501 kernel: audit: type=1334 audit(1769564641.913:17): prog-id=8 op=LOAD
Jan 28 01:44:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:01.913000 audit: BPF prog-id=7 op=LOAD
Jan 28 01:44:01.913000 audit: BPF prog-id=8 op=LOAD
Jan 28 01:44:01.964299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:44:02.270048 systemd-udevd[587]: Using default interface naming scheme 'v257'.
Jan 28 01:44:02.463805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:44:02.573048 kernel: audit: type=1130 audit(1769564642.479:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:02.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:02.494848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:44:02.829123 dracut-pre-trigger[639]: rd.md=0: removing MD RAID activation
Jan 28 01:44:02.893195 kernel: audit: type=1130 audit(1769564642.832:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:02.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:02.831361 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:44:02.911000 audit: BPF prog-id=9 op=LOAD
Jan 28 01:44:02.929393 kernel: audit: type=1334 audit(1769564642.911:20): prog-id=9 op=LOAD
Jan 28 01:44:02.957586 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:44:03.299362 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:44:03.341893 kernel: audit: type=1130 audit(1769564643.306:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.310379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:44:03.379986 systemd-networkd[702]: lo: Link UP
Jan 28 01:44:03.379993 systemd-networkd[702]: lo: Gained carrier
Jan 28 01:44:03.469504 kernel: audit: type=1130 audit(1769564643.410:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.390184 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:44:03.412125 systemd[1]: Reached target network.target - Network.
Jan 28 01:44:03.870160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:44:03.943849 kernel: audit: type=1130 audit(1769564643.886:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:03.897548 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:44:04.485500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 01:44:04.587979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:44:04.697990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:44:05.061163 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:44:05.108751 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:44:05.090030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:44:05.144034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:44:05.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:05.145435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:44:05.154583 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:44:05.194858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:44:05.304633 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:44:05.312619 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:44:05.442340 systemd-networkd[702]: eth0: Link UP
Jan 28 01:44:05.469642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 28 01:44:05.468540 systemd-networkd[702]: eth0: Gained carrier
Jan 28 01:44:05.468562 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:44:05.539832 disk-uuid[768]: Primary Header is updated.
Jan 28 01:44:05.539832 disk-uuid[768]: Secondary Entries is updated.
Jan 28 01:44:05.539832 disk-uuid[768]: Secondary Header is updated.
Jan 28 01:44:05.596926 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:44:05.946790 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:44:06.873001 systemd-networkd[702]: eth0: Gained IPv6LL
Jan 28 01:44:07.039235 disk-uuid[771]: Warning: The kernel is still using the old partition table.
Jan 28 01:44:07.039235 disk-uuid[771]: The new table will be used at the next reboot or after you
Jan 28 01:44:07.039235 disk-uuid[771]: run partprobe(8) or kpartx(8)
Jan 28 01:44:07.039235 disk-uuid[771]: The operation has completed successfully.
Jan 28 01:44:07.190776 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:44:07.190858 kernel: audit: type=1130 audit(1769564647.161:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.126422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:44:07.257089 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:44:07.257399 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:44:07.494798 kernel: audit: type=1130 audit(1769564647.310:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.494949 kernel: audit: type=1131 audit(1769564647.310:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.494969 kernel: audit: type=1130 audit(1769564647.380:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.312342 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:44:07.392848 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:44:07.427403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:44:07.427657 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:44:07.494225 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:44:07.554922 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:44:07.816189 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:44:07.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.871083 kernel: audit: type=1130 audit(1769564647.853:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:07.871157 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (853)
Jan 28 01:44:07.891996 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:44:07.892124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:07.953433 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:44:07.954100 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:44:08.073661 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:44:08.111594 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:44:08.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:08.195454 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:44:08.237538 kernel: audit: type=1130 audit(1769564648.160:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:10.656370 ignition[877]: Ignition 2.24.0
Jan 28 01:44:10.656449 ignition[877]: Stage: fetch-offline
Jan 28 01:44:10.656780 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:10.656797 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:10.657456 ignition[877]: parsed url from cmdline: ""
Jan 28 01:44:10.657464 ignition[877]: no config URL provided
Jan 28 01:44:10.658776 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:44:10.658840 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:44:10.659126 ignition[877]: op(1): [started] loading QEMU firmware config module
Jan 28 01:44:10.659135 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:44:10.737170 ignition[877]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:44:11.406270 ignition[877]: parsing config with SHA512: 7c2b3f6345426775d30437f0e9e0faa8d649dae9a2dad46be333ee189d4d857609e9d9b7a49e05632e995bb65d1afe134910962517366159458a1effa76d36ce
Jan 28 01:44:11.509914 unknown[877]: fetched base config from "system"
Jan 28 01:44:11.509937 unknown[877]: fetched user config from "qemu"
Jan 28 01:44:11.614631 kernel: audit: type=1130 audit(1769564651.566:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:11.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:11.546032 ignition[877]: fetch-offline: fetch-offline passed
Jan 28 01:44:11.558660 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:44:11.546572 ignition[877]: Ignition finished successfully
Jan 28 01:44:11.567381 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:44:11.572519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:44:11.918898 ignition[888]: Ignition 2.24.0
Jan 28 01:44:11.918920 ignition[888]: Stage: kargs
Jan 28 01:44:11.919248 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:11.919268 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:11.934164 ignition[888]: kargs: kargs passed
Jan 28 01:44:11.934256 ignition[888]: Ignition finished successfully
Jan 28 01:44:12.033608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:44:12.097781 kernel: audit: type=1130 audit(1769564652.054:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:12.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:12.084439 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:44:12.262443 ignition[896]: Ignition 2.24.0
Jan 28 01:44:12.262461 ignition[896]: Stage: disks
Jan 28 01:44:12.267486 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:12.267505 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:12.271973 ignition[896]: disks: disks passed
Jan 28 01:44:12.272051 ignition[896]: Ignition finished successfully
Jan 28 01:44:12.327959 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:44:12.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:12.405272 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:44:12.432451 kernel: audit: type=1130 audit(1769564652.371:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:12.458852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:44:12.480246 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:44:12.480413 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:44:12.480506 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:44:12.553166 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:44:13.032551 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 28 01:44:13.051023 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:44:13.152766 kernel: audit: type=1130 audit(1769564653.066:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:13.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:13.070863 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:44:13.796955 kernel: EXT4-fs (vda9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none.
Jan 28 01:44:13.803923 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:44:13.810114 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:44:13.834053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:44:13.888967 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:44:13.902110 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:44:13.902407 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:44:13.944527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915)
Jan 28 01:44:13.902453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:44:13.973371 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:44:13.973646 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:13.989434 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:44:14.013532 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:44:14.046410 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:44:14.046470 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:44:14.056999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:44:14.992152 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:44:15.081812 kernel: audit: type=1130 audit(1769564655.005:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:15.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:15.015197 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:44:15.050962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:44:15.361550 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:44:15.369512 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:44:15.609568 ignition[1012]: INFO : Ignition 2.24.0
Jan 28 01:44:15.609568 ignition[1012]: INFO : Stage: mount
Jan 28 01:44:15.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:15.682954 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:15.682954 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:15.781278 kernel: audit: type=1130 audit(1769564655.651:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:15.610542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:44:15.804547 ignition[1012]: INFO : mount: mount passed
Jan 28 01:44:15.804547 ignition[1012]: INFO : Ignition finished successfully
Jan 28 01:44:15.848092 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:44:15.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:15.944016 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:44:15.972430 kernel: audit: type=1130 audit(1769564655.909:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:16.061162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:44:16.185018 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Jan 28 01:44:16.246828 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:44:16.246958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:16.359218 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:44:16.359930 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:44:16.401367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:44:16.786626 ignition[1041]: INFO : Ignition 2.24.0
Jan 28 01:44:16.786626 ignition[1041]: INFO : Stage: files
Jan 28 01:44:16.786626 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:16.786626 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:16.904487 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:44:16.904487 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:44:16.904487 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:44:17.142088 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:44:17.158512 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:44:17.158512 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:44:17.154083 unknown[1041]: wrote ssh authorized keys file for user: core
Jan 28 01:44:17.211464 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 01:44:17.211464 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 28 01:44:17.597294 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 01:44:18.735661 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:44:18.769438 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:44:19.084013 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 28 01:44:19.583120 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 01:44:29.172017 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 28 01:44:29.172017 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 28 01:44:29.301900 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:44:29.861313 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:44:29.946765 ignition[1041]: INFO : files: files passed
Jan 28 01:44:29.946765 ignition[1041]: INFO : Ignition finished successfully
Jan 28 01:44:30.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.043518 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:44:30.197067 kernel: audit: type=1130 audit(1769564670.153:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.207249 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:44:30.256746 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:44:30.547000 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:44:30.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.586299 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 01:44:30.660278 kernel: audit: type=1130 audit(1769564670.546:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.660317 kernel: audit: type=1131 audit(1769564670.546:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.547262 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:44:30.674579 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:44:30.674579 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:44:30.734106 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:44:30.705777 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:44:30.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.759182 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:44:30.802407 kernel: audit: type=1130 audit(1769564670.756:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:30.801648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:44:31.342028 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:44:31.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.472773 kernel: audit: type=1130 audit(1769564671.361:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.472819 kernel: audit: type=1131 audit(1769564671.361:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.343809 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:44:31.363317 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:44:31.365657 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 01:44:31.541560 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:44:31.554773 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:44:31.805132 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:44:31.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.845820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:44:31.876726 kernel: audit: type=1130 audit(1769564671.831:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:31.940142 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:44:31.940564 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:44:31.992729 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:44:32.044333 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:44:32.078520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:44:32.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.083937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:44:32.136562 kernel: audit: type=1131 audit(1769564672.105:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.131751 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:44:32.171627 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:44:32.180232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:44:32.223927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:44:32.237576 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:44:32.247914 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:44:32.290258 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:44:32.426249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:44:32.485958 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:44:32.529524 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:44:32.541790 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:44:32.568858 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:44:32.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.569092 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:44:32.700173 kernel: audit: type=1131 audit(1769564672.607:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.608594 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:44:32.608870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:44:32.608975 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:44:32.667229 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:44:32.690929 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:44:32.691371 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:44:32.808250 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:44:32.904420 kernel: audit: type=1131 audit(1769564672.806:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:32.808530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:44:32.853116 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:44:32.946959 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:44:32.986945 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:44:33.041894 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:44:33.103177 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:44:33.114058 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:44:33.114296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:44:33.173510 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:44:33.173861 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:44:33.242354 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 28 01:44:33.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.245986 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:44:33.262374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:44:33.262643 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:44:33.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.283118 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:44:33.283300 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:44:33.304090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:44:33.510430 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:44:33.534938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:44:33.535177 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:44:33.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.695063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:44:33.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.695349 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:44:33.758869 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:44:33.759138 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:44:33.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.834437 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:44:33.834995 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:44:33.900209 ignition[1098]: INFO : Ignition 2.24.0
Jan 28 01:44:33.900209 ignition[1098]: INFO : Stage: umount
Jan 28 01:44:33.900209 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:33.900209 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:33.900209 ignition[1098]: INFO : umount: umount passed
Jan 28 01:44:33.900209 ignition[1098]: INFO : Ignition finished successfully
Jan 28 01:44:33.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.889563 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:44:34.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.889866 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:44:33.912023 systemd[1]: Stopped target network.target - Network.
Jan 28 01:44:33.928778 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:44:33.928912 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:44:33.947820 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:44:33.947957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:44:34.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:33.961004 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:44:33.961113 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:44:34.308000 audit: BPF prog-id=9 op=UNLOAD
Jan 28 01:44:34.309000 audit: BPF prog-id=6 op=UNLOAD
Jan 28 01:44:33.961232 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:44:33.961297 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:44:33.961626 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:44:33.961847 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:44:33.974847 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:44:33.975903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:44:33.976061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:44:34.170605 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:44:34.171089 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:44:34.273812 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:44:34.275182 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:44:34.309121 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 28 01:44:34.413014 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:44:34.413132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:44:34.429459 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:44:34.429567 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:44:34.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.513632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:44:34.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.529583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:44:34.529761 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:44:34.545996 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:44:34.546065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:44:34.561982 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:44:34.562107 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:44:34.569080 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:44:34.726894 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:44:34.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.727180 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:44:34.766299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:44:34.768520 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:44:34.818208 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:44:34.818491 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:44:34.829870 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:44:34.829954 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:44:34.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:34.867360 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:44:34.867527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:44:34.867834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:44:34.867909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:44:34.874252 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:44:34.880588 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 28 01:44:34.883023 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:44:34.883592 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:44:34.883763 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:44:34.883872 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 28 01:44:34.883946 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:44:34.884055 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:44:34.884122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:44:34.884267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:44:34.884339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:44:34.889030 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:44:34.986587 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:44:35.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.249580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:44:35.282806 kernel: kauditd_printk_skb: 31 callbacks suppressed
Jan 28 01:44:35.282878 kernel: audit: type=1131 audit(1769564675.241:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.249868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:44:35.440587 kernel: audit: type=1130 audit(1769564675.326:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.440753 kernel: audit: type=1131 audit(1769564675.326:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:35.340240 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:44:35.478924 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:44:35.650804 systemd[1]: Switching root.
Jan 28 01:44:35.791134 systemd-journald[321]: Journal stopped
Jan 28 01:44:46.742318 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Jan 28 01:44:46.744066 kernel: audit: type=1335 audit(1769564675.807:82): pid=321 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1
Jan 28 01:44:46.744159 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 01:44:46.744186 kernel: SELinux: policy capability open_perms=1
Jan 28 01:44:46.744204 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 01:44:46.744282 kernel: SELinux: policy capability always_check_network=0
Jan 28 01:44:46.744302 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 01:44:46.744320 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 01:44:46.744348 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 01:44:46.744364 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 01:44:46.744382 kernel: SELinux: policy capability userspace_initial_context=0
Jan 28 01:44:46.744408 kernel: audit: type=1403 audit(1769564676.697:83): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 28 01:44:46.745807 systemd[1]: Successfully loaded SELinux policy in 424.987ms.
Jan 28 01:44:46.745900 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 139.390ms.
Jan 28 01:44:46.745926 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:44:46.745945 systemd[1]: Detected virtualization kvm.
Jan 28 01:44:46.745962 systemd[1]: Detected architecture x86-64.
Jan 28 01:44:46.745984 systemd[1]: Detected first boot.
Jan 28 01:44:46.746002 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 01:44:46.746081 kernel: audit: type=1334 audit(1769564677.544:84): prog-id=10 op=LOAD
Jan 28 01:44:46.746156 kernel: audit: type=1334 audit(1769564677.548:85): prog-id=10 op=UNLOAD
Jan 28 01:44:46.746175 kernel: audit: type=1334 audit(1769564677.548:86): prog-id=11 op=LOAD
Jan 28 01:44:46.746191 kernel: audit: type=1334 audit(1769564677.548:87): prog-id=11 op=UNLOAD
Jan 28 01:44:46.746210 zram_generator::config[1142]: No configuration found.
Jan 28 01:44:46.746273 kernel: Guest personality initialized and is inactive
Jan 28 01:44:46.746296 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 28 01:44:46.746388 kernel: Initialized host personality
Jan 28 01:44:46.746408 kernel: NET: Registered PF_VSOCK protocol family
Jan 28 01:44:46.746425 systemd[1]: Populated /etc with preset unit settings.
Jan 28 01:44:46.749080 kernel: audit: type=1334 audit(1769564682.712:88): prog-id=12 op=LOAD
Jan 28 01:44:46.749105 kernel: audit: type=1334 audit(1769564682.712:89): prog-id=3 op=UNLOAD
Jan 28 01:44:46.749127 kernel: audit: type=1334 audit(1769564682.712:90): prog-id=13 op=LOAD
Jan 28 01:44:46.749146 kernel: audit: type=1334 audit(1769564682.712:91): prog-id=14 op=LOAD
Jan 28 01:44:46.749233 kernel: audit: type=1334 audit(1769564682.712:92): prog-id=4 op=UNLOAD
Jan 28 01:44:46.749260 kernel: audit: type=1334 audit(1769564682.712:93): prog-id=5 op=UNLOAD
Jan 28 01:44:46.749282 kernel: audit: type=1131 audit(1769564682.731:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.749304 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 01:44:46.749374 kernel: audit: type=1334 audit(1769564682.799:95): prog-id=12 op=UNLOAD
Jan 28 01:44:46.749399 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 01:44:46.749420 kernel: audit: type=1130 audit(1769564682.845:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.749531 kernel: audit: type=1131 audit(1769564682.845:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.749554 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 01:44:46.749586 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 01:44:46.749607 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 01:44:46.749628 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 01:44:46.749838 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 01:44:46.749861 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 01:44:46.749932 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 01:44:46.750001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 01:44:46.750021 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 01:44:46.750040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:44:46.750061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:44:46.750140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 01:44:46.750161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 01:44:46.750230 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 01:44:46.750260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:44:46.750282 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 01:44:46.750355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:44:46.750380 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:44:46.750501 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 01:44:46.750524 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 01:44:46.750544 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:44:46.750561 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 01:44:46.750579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:44:46.750599 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:44:46.750619 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 28 01:44:46.750780 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:44:46.750803 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:44:46.750821 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 01:44:46.750843 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 01:44:46.750863 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 28 01:44:46.750881 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:44:46.750898 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 28 01:44:46.750969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:44:46.750991 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 28 01:44:46.751013 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 28 01:44:46.751034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:44:46.751052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:44:46.751072 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 01:44:46.751092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 01:44:46.751169 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 01:44:46.751190 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 01:44:46.751208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:44:46.751227 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 01:44:46.751248 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 01:44:46.751265 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 01:44:46.751284 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 01:44:46.751354 systemd[1]: Reached target machines.target - Containers.
Jan 28 01:44:46.751376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 01:44:46.751398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:44:46.751417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:44:46.755234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 01:44:46.755275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:44:46.755299 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:44:46.755385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:44:46.755413 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 01:44:46.755484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:44:46.755518 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 01:44:46.755541 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 01:44:46.755565 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 01:44:46.755587 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 01:44:46.756063 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 01:44:46.756088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 01:44:46.756171 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:44:46.756375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:44:46.756401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:44:46.756422 kernel: fuse: init (API version 7.41)
Jan 28 01:44:46.756502 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 01:44:46.756586 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 28 01:44:46.756612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:44:46.756634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:44:46.756656 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 01:44:46.756777 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 01:44:46.756802 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 01:44:46.756823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 01:44:46.756904 kernel: ACPI: bus type drm_connector registered
Jan 28 01:44:46.756925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 01:44:46.756994 systemd-journald[1228]: Collecting audit messages is enabled.
Jan 28 01:44:46.757035 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 01:44:46.757123 systemd-journald[1228]: Journal started
Jan 28 01:44:46.759082 systemd-journald[1228]: Runtime Journal (/run/log/journal/419149a44f964b778968df4813c218c8) is 6M, max 48.2M, 42.1M free.
Jan 28 01:44:44.445000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 28 01:44:45.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:45.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:45.955000 audit: BPF prog-id=14 op=UNLOAD
Jan 28 01:44:45.955000 audit: BPF prog-id=13 op=UNLOAD
Jan 28 01:44:45.962000 audit: BPF prog-id=15 op=LOAD
Jan 28 01:44:45.998000 audit: BPF prog-id=16 op=LOAD
Jan 28 01:44:46.044000 audit: BPF prog-id=17 op=LOAD
Jan 28 01:44:46.730000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 28 01:44:46.730000 audit[1228]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe3d0f4970 a2=4000 a3=0 items=0 ppid=1 pid=1228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:44:46.730000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 28 01:44:42.674152 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 01:44:42.715123 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 01:44:42.726627 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 01:44:42.733608 systemd[1]: systemd-journald.service: Consumed 2.418s CPU time.
Jan 28 01:44:46.781965 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:44:46.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.799004 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 01:44:46.836378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:44:46.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.865927 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 01:44:46.868855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 01:44:46.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.883123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:44:46.883505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:44:46.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.890386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:44:46.890858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:44:46.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.906287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:44:46.909556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:44:46.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.942888 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 01:44:46.943479 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 01:44:46.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.954584 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:44:46.956881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:44:46.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:46.968362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:44:47.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:47.136078 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:44:47.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:47.234883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 01:44:47.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:47.286310 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 28 01:44:47.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:47.366962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:44:47.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:47.831654 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:44:47.848961 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 28 01:44:47.866180 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 01:44:47.891835 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 01:44:47.945888 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 01:44:47.950144 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:44:47.973802 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 28 01:44:48.010325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:44:48.012768 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 01:44:48.035236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 01:44:48.062190 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 01:44:48.085927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:44:48.147401 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 01:44:48.173911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:44:48.194977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:44:48.234867 systemd-journald[1228]: Time spent on flushing to /var/log/journal/419149a44f964b778968df4813c218c8 is 193.564ms for 1141 entries.
Jan 28 01:44:48.234867 systemd-journald[1228]: System Journal (/var/log/journal/419149a44f964b778968df4813c218c8) is 8M, max 163.5M, 155.5M free.
Jan 28 01:44:48.532319 systemd-journald[1228]: Received client request to flush runtime journal.
Jan 28 01:44:48.532505 kernel: kauditd_printk_skb: 31 callbacks suppressed
Jan 28 01:44:48.532571 kernel: audit: type=1130 audit(1769564688.459:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:48.532616 kernel: loop1: detected capacity change from 0 to 50784
Jan 28 01:44:48.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:48.235977 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 01:44:48.296512 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:44:48.361402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 01:44:48.410176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 01:44:48.433526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 01:44:48.487616 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 01:44:48.568040 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 28 01:44:48.604362 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 01:44:48.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.686388 kernel: audit: type=1130 audit(1769564688.659:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.678155 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 28 01:44:49.678177 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 28 01:44:49.731976 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:44:49.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.840782 kernel: audit: type=1130 audit(1769564689.772:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.872033 kernel: loop2: detected capacity change from 0 to 229808
Jan 28 01:44:49.849950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:44:49.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.992829 kernel: audit: type=1130 audit(1769564689.907:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:49.991012 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 01:44:50.154584 kernel: loop3: detected capacity change from 0 to 111560
Jan 28 01:44:50.454405 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 01:44:50.458993 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 28 01:44:50.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:50.549394 kernel: audit: type=1130 audit(1769564690.488:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:50.626600 kernel: loop4: detected capacity change from 0 to 50784
Jan 28 01:44:50.771008 kernel: loop5: detected capacity change from 0 to 229808
Jan 28 01:44:51.117114 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 01:44:51.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:51.242383 kernel: audit: type=1130 audit(1769564691.169:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:44:51.242572 kernel: audit: type=1334 audit(1769564691.188:133): prog-id=18 op=LOAD
Jan 28 01:44:51.188000 audit: BPF prog-id=18 op=LOAD
Jan 28 01:44:51.241010 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 28 01:44:51.267010 kernel: loop6: detected capacity change from 0 to 111560
Jan 28 01:44:51.190000 audit: BPF prog-id=19 op=LOAD
Jan 28 01:44:51.190000 audit: BPF prog-id=20 op=LOAD
Jan 28 01:44:51.270608 kernel: audit: type=1334 audit(1769564691.190:134): prog-id=19 op=LOAD
Jan 28 01:44:51.270787 kernel: audit: type=1334 audit(1769564691.190:135): prog-id=20 op=LOAD
Jan 28 01:44:51.360942 kernel: audit: type=1334 audit(1769564691.352:136): prog-id=21 op=LOAD
Jan 28 01:44:51.352000 audit: BPF prog-id=21 op=LOAD
Jan 28 01:44:51.359891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:44:51.382769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:44:51.414000 audit: BPF prog-id=22 op=LOAD
Jan 28 01:44:51.414000 audit: BPF prog-id=23 op=LOAD
Jan 28 01:44:51.414000 audit: BPF prog-id=24 op=LOAD
Jan 28 01:44:51.430641 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 28 01:44:51.466000 audit: BPF prog-id=25 op=LOAD
Jan 28 01:44:51.468000 audit: BPF prog-id=26 op=LOAD
Jan 28 01:44:51.468000 audit: BPF prog-id=27 op=LOAD
Jan 28 01:44:51.473946 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 28 01:44:51.510891 (sd-merge)[1287]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 28 01:44:51.852980 (sd-merge)[1287]: Merged extensions into '/usr'.
Jan 28 01:44:52.119562 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 01:44:52.126279 systemd[1]: Reloading...
Jan 28 01:44:52.300334 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Jan 28 01:44:52.300404 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 28 01:44:52.390969 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 01:44:53.100597 zram_generator::config[1337]: No configuration found. Jan 28 01:44:53.960156 systemd-oomd[1289]: No swap; memory pressure usage will be degraded Jan 28 01:44:54.062328 systemd-resolved[1290]: Positive Trust Anchors: Jan 28 01:44:54.062350 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:44:54.062357 systemd-resolved[1290]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:44:54.062400 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:44:54.150366 systemd-resolved[1290]: Defaulting to hostname 'linux'. Jan 28 01:44:54.505220 systemd[1]: Reloading finished in 2360 ms. Jan 28 01:44:54.637246 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 01:44:54.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.666853 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 28 01:44:54.671582 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 01:44:54.671819 kernel: audit: type=1130 audit(1769564694.658:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.705109 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 01:44:54.747834 kernel: audit: type=1130 audit(1769564694.697:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.773299 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:44:54.805933 kernel: audit: type=1130 audit(1769564694.767:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.832949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 28 01:44:54.861985 kernel: audit: type=1130 audit(1769564694.822:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.880952 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:44:54.899088 kernel: audit: type=1130 audit(1769564694.866:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.933942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:44:54.946040 kernel: audit: type=1130 audit(1769564694.908:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:54.986843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 28 01:44:54.994393 kernel: audit: type=1130 audit(1769564694.950:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:55.018164 systemd[1]: Starting ensure-sysext.service... Jan 28 01:44:55.041148 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:44:55.063000 audit: BPF prog-id=8 op=UNLOAD Jan 28 01:44:55.063000 audit: BPF prog-id=7 op=UNLOAD Jan 28 01:44:55.065000 audit: BPF prog-id=28 op=LOAD Jan 28 01:44:55.065000 audit: BPF prog-id=29 op=LOAD Jan 28 01:44:55.083778 kernel: audit: type=1334 audit(1769564695.063:150): prog-id=8 op=UNLOAD Jan 28 01:44:55.083830 kernel: audit: type=1334 audit(1769564695.063:151): prog-id=7 op=UNLOAD Jan 28 01:44:55.083852 kernel: audit: type=1334 audit(1769564695.065:152): prog-id=28 op=LOAD Jan 28 01:44:55.084415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 28 01:44:55.153000 audit: BPF prog-id=30 op=LOAD Jan 28 01:44:55.153000 audit: BPF prog-id=15 op=UNLOAD Jan 28 01:44:55.153000 audit: BPF prog-id=31 op=LOAD Jan 28 01:44:55.153000 audit: BPF prog-id=32 op=LOAD Jan 28 01:44:55.153000 audit: BPF prog-id=16 op=UNLOAD Jan 28 01:44:55.153000 audit: BPF prog-id=17 op=UNLOAD Jan 28 01:44:55.161000 audit: BPF prog-id=33 op=LOAD Jan 28 01:44:55.161000 audit: BPF prog-id=25 op=UNLOAD Jan 28 01:44:55.161000 audit: BPF prog-id=34 op=LOAD Jan 28 01:44:55.161000 audit: BPF prog-id=35 op=LOAD Jan 28 01:44:55.161000 audit: BPF prog-id=26 op=UNLOAD Jan 28 01:44:55.161000 audit: BPF prog-id=27 op=UNLOAD Jan 28 01:44:55.172000 audit: BPF prog-id=36 op=LOAD Jan 28 01:44:55.172000 audit: BPF prog-id=22 op=UNLOAD Jan 28 01:44:55.172000 audit: BPF prog-id=37 op=LOAD Jan 28 01:44:55.172000 audit: BPF prog-id=38 op=LOAD Jan 28 01:44:55.172000 audit: BPF prog-id=23 op=UNLOAD Jan 28 01:44:55.172000 audit: BPF prog-id=24 op=UNLOAD Jan 28 01:44:55.176000 audit: BPF prog-id=39 op=LOAD Jan 28 01:44:55.176000 audit: BPF prog-id=18 op=UNLOAD Jan 28 01:44:55.176000 audit: BPF prog-id=40 op=LOAD Jan 28 01:44:55.183000 audit: BPF prog-id=41 op=LOAD Jan 28 01:44:55.183000 audit: BPF prog-id=19 op=UNLOAD Jan 28 01:44:55.183000 audit: BPF prog-id=20 op=UNLOAD Jan 28 01:44:55.189000 audit: BPF prog-id=42 op=LOAD Jan 28 01:44:55.189000 audit: BPF prog-id=21 op=UNLOAD Jan 28 01:44:55.220010 systemd[1]: Reload requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:44:55.220033 systemd[1]: Reloading... Jan 28 01:44:55.261562 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 01:44:55.261650 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:44:55.262901 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 28 01:44:55.272230 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 28 01:44:55.272356 systemd-tmpfiles[1376]: ACLs are not supported, ignoring. Jan 28 01:44:55.329182 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:44:55.333615 systemd-tmpfiles[1376]: Skipping /boot Jan 28 01:44:55.353960 systemd-udevd[1377]: Using default interface naming scheme 'v257'. Jan 28 01:44:55.467014 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:44:55.467040 systemd-tmpfiles[1376]: Skipping /boot Jan 28 01:44:55.560054 zram_generator::config[1411]: No configuration found. Jan 28 01:44:56.000834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 01:44:56.007771 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:44:56.021800 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:44:56.064832 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:44:56.065308 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:44:56.103386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:44:56.114295 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:44:56.117000 systemd[1]: Reloading finished in 894 ms. Jan 28 01:44:56.135208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:44:56.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.162904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 28 01:44:56.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.178000 audit: BPF prog-id=43 op=LOAD Jan 28 01:44:56.180000 audit: BPF prog-id=39 op=UNLOAD Jan 28 01:44:56.180000 audit: BPF prog-id=44 op=LOAD Jan 28 01:44:56.182000 audit: BPF prog-id=45 op=LOAD Jan 28 01:44:56.183000 audit: BPF prog-id=40 op=UNLOAD Jan 28 01:44:56.183000 audit: BPF prog-id=41 op=UNLOAD Jan 28 01:44:56.194000 audit: BPF prog-id=46 op=LOAD Jan 28 01:44:56.194000 audit: BPF prog-id=33 op=UNLOAD Jan 28 01:44:56.195000 audit: BPF prog-id=47 op=LOAD Jan 28 01:44:56.195000 audit: BPF prog-id=48 op=LOAD Jan 28 01:44:56.195000 audit: BPF prog-id=34 op=UNLOAD Jan 28 01:44:56.195000 audit: BPF prog-id=35 op=UNLOAD Jan 28 01:44:56.198000 audit: BPF prog-id=49 op=LOAD Jan 28 01:44:56.198000 audit: BPF prog-id=42 op=UNLOAD Jan 28 01:44:56.202000 audit: BPF prog-id=50 op=LOAD Jan 28 01:44:56.203000 audit: BPF prog-id=51 op=LOAD Jan 28 01:44:56.203000 audit: BPF prog-id=28 op=UNLOAD Jan 28 01:44:56.203000 audit: BPF prog-id=29 op=UNLOAD Jan 28 01:44:56.204000 audit: BPF prog-id=52 op=LOAD Jan 28 01:44:56.205000 audit: BPF prog-id=30 op=UNLOAD Jan 28 01:44:56.205000 audit: BPF prog-id=53 op=LOAD Jan 28 01:44:56.205000 audit: BPF prog-id=54 op=LOAD Jan 28 01:44:56.205000 audit: BPF prog-id=31 op=UNLOAD Jan 28 01:44:56.205000 audit: BPF prog-id=32 op=UNLOAD Jan 28 01:44:56.206000 audit: BPF prog-id=55 op=LOAD Jan 28 01:44:56.208000 audit: BPF prog-id=36 op=UNLOAD Jan 28 01:44:56.208000 audit: BPF prog-id=56 op=LOAD Jan 28 01:44:56.208000 audit: BPF prog-id=57 op=LOAD Jan 28 01:44:56.208000 audit: BPF prog-id=37 op=UNLOAD Jan 28 01:44:56.208000 audit: BPF prog-id=38 op=UNLOAD Jan 28 01:44:56.338761 systemd[1]: Finished ensure-sysext.service. 
Jan 28 01:44:56.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.360529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:56.367029 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:44:56.386138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:44:56.407971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:44:56.430052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:44:56.440057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:44:56.452828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:44:56.505838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:44:56.525333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:44:56.526286 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:44:56.532345 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:44:56.551951 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:44:56.563120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 28 01:44:56.575142 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:44:56.597000 audit: BPF prog-id=58 op=LOAD Jan 28 01:44:56.611988 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:44:56.640000 audit: BPF prog-id=59 op=LOAD Jan 28 01:44:56.650216 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:44:56.659541 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:44:56.678980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:44:56.685385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:56.688108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:44:56.689220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:44:56.696095 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:44:56.696619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:44:56.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.703382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:44:56.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:44:56.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.741065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:44:56.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.749249 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:44:56.749875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:44:56.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.761583 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:44:56.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:44:56.795176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:44:56.795294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:44:56.803000 audit[1520]: SYSTEM_BOOT pid=1520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:44:56.807000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:44:56.807000 audit[1523]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee86103a0 a2=420 a3=0 items=0 ppid=1490 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:44:56.807000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:44:56.811019 augenrules[1523]: No rules Jan 28 01:44:56.831941 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:44:56.832440 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:44:56.833514 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:44:56.853940 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:44:56.960012 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:44:56.961352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 28 01:44:57.233069 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:44:57.235882 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:44:57.398176 systemd-networkd[1515]: lo: Link UP Jan 28 01:44:57.398193 systemd-networkd[1515]: lo: Gained carrier Jan 28 01:44:57.425905 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:44:57.426434 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:44:57.435829 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:44:57.457140 systemd-networkd[1515]: eth0: Link UP Jan 28 01:44:57.465255 systemd-networkd[1515]: eth0: Gained carrier Jan 28 01:44:57.465417 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:44:57.673106 systemd-networkd[1515]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:44:57.674780 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 28 01:44:57.684517 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:44:57.684858 systemd-timesyncd[1519]: Initial clock synchronization to Wed 2026-01-28 01:44:57.710202 UTC. Jan 28 01:44:57.867356 systemd[1]: Reached target network.target - Network. Jan 28 01:44:57.889348 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:44:57.907390 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:44:57.924343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 01:44:58.040376 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:44:58.961350 systemd-networkd[1515]: eth0: Gained IPv6LL Jan 28 01:44:58.971165 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:44:59.006890 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:44:59.568585 ldconfig[1504]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:44:59.575031 kernel: kvm_amd: TSC scaling supported Jan 28 01:44:59.582508 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:44:59.582568 kernel: kvm_amd: Nested Paging enabled Jan 28 01:44:59.582596 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:44:59.586230 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:44:59.608980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:44:59.619067 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:44:59.706457 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:44:59.724309 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:44:59.742560 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:44:59.757491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:44:59.776488 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:44:59.806578 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:44:59.821259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:44:59.834529 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Jan 28 01:44:59.853185 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 01:44:59.865887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:44:59.877316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:44:59.877415 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:44:59.891089 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:44:59.913918 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:44:59.949994 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:44:59.974299 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:45:00.001510 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:45:00.030310 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:45:00.054817 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:45:00.066647 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:45:00.081745 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:45:00.099480 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:45:00.105184 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:45:00.110008 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:45:00.110104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:45:00.114499 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:45:00.129244 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 28 01:45:00.147570 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:45:00.232428 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:45:00.252202 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:45:00.271998 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:45:00.278224 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:45:00.289024 jq[1560]: false Jan 28 01:45:00.288887 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:45:00.306161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:45:00.326630 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:45:00.326978 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 28 01:45:00.328278 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 28 01:45:00.336775 extend-filesystems[1561]: Found /dev/vda6 Jan 28 01:45:00.347018 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:45:00.353402 extend-filesystems[1561]: Found /dev/vda9 Jan 28 01:45:00.360844 extend-filesystems[1561]: Checking size of /dev/vda9 Jan 28 01:45:00.367895 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 28 01:45:00.367895 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:45:00.367829 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:45:00.367451 oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 28 01:45:00.367482 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 28 01:45:00.377740 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 28 01:45:00.377657 oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 28 01:45:00.380996 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:45:00.402337 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:45:00.406234 extend-filesystems[1561]: Resized partition /dev/vda9 Jan 28 01:45:00.419333 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 28 01:45:00.451365 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 28 01:45:00.451365 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:45:00.419361 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:45:00.456143 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:45:00.514255 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 28 01:45:00.510605 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:45:00.525504 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:45:00.526803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:45:00.528855 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:45:00.536795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:45:00.576303 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 28 01:45:00.578439 jq[1590]: true Jan 28 01:45:00.600871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:45:00.604436 update_engine[1589]: I20260128 01:45:00.585974 1589 main.cc:92] Flatcar Update Engine starting Jan 28 01:45:00.602252 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:45:00.607422 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:45:00.608035 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:45:00.631782 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:45:00.638848 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:45:00.655936 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:45:00.680833 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 28 01:45:00.681549 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:45:00.682742 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:45:00.745288 jq[1608]: true Jan 28 01:45:00.751811 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:45:00.751811 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:45:00.751811 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 28 01:45:00.797170 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Jan 28 01:45:00.769897 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:45:00.771928 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:45:00.803458 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:45:00.804051 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 28 01:45:00.852958 tar[1607]: linux-amd64/LICENSE Jan 28 01:45:00.856929 tar[1607]: linux-amd64/helm Jan 28 01:45:00.874596 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:45:00.883121 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:45:00.883222 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:45:00.890061 systemd-logind[1586]: New seat seat0. Jan 28 01:45:00.895640 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:45:00.961955 dbus-daemon[1558]: [system] SELinux support is enabled Jan 28 01:45:00.963266 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:45:00.982858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:45:01.029435 bash[1643]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:45:00.982941 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:45:00.997298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:45:00.997333 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:45:01.014372 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:45:01.048086 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 28 01:45:01.068976 update_engine[1589]: I20260128 01:45:01.068629 1589 update_check_scheduler.cc:74] Next update check in 9m15s Jan 28 01:45:01.069332 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 01:45:01.069482 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:45:01.086354 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:45:01.097509 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:45:01.168998 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:45:01.195047 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:45:01.316329 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:45:01.318590 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:45:01.349608 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:45:01.384277 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:45:01.387239 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:45:01.437761 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:45:01.470006 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:45:01.501888 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:45:01.520825 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 28 01:45:01.594836 containerd[1609]: time="2026-01-28T01:45:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:45:01.597117 containerd[1609]: time="2026-01-28T01:45:01.597037202Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:45:01.642927 containerd[1609]: time="2026-01-28T01:45:01.642788012Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.61µs" Jan 28 01:45:01.642927 containerd[1609]: time="2026-01-28T01:45:01.642868321Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:45:01.642927 containerd[1609]: time="2026-01-28T01:45:01.642930242Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:45:01.643096 containerd[1609]: time="2026-01-28T01:45:01.642948379Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:45:01.643248 containerd[1609]: time="2026-01-28T01:45:01.643185746Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:45:01.643248 containerd[1609]: time="2026-01-28T01:45:01.643210892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:45:01.643393 containerd[1609]: time="2026-01-28T01:45:01.643369023Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:45:01.643393 containerd[1609]: time="2026-01-28T01:45:01.643388744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 
01:45:01.643865 containerd[1609]: time="2026-01-28T01:45:01.643833231Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.643865 containerd[1609]: time="2026-01-28T01:45:01.643860491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:45:01.643969 containerd[1609]: time="2026-01-28T01:45:01.643877816Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:45:01.643969 containerd[1609]: time="2026-01-28T01:45:01.643890380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647222275Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647245786Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647438477Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647829203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647871495Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647884918Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.647927429Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:45:01.648264 containerd[1609]: time="2026-01-28T01:45:01.648174773Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:45:01.649101 containerd[1609]: time="2026-01-28T01:45:01.648272296Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687283675Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687792088Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687925505Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687950931Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687971695Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.687990644Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688005904Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688022446Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688039432Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688057468Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688071855Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688087125Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688102315Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:45:01.688145 containerd[1609]: time="2026-01-28T01:45:01.688131270Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688311178Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688401673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688421656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688436764Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688452114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688466060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688480969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688498004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688512842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688527751Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688541056Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688577049Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688623490Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688639752Z" level=info msg="Start snapshots syncer" Jan 28 01:45:01.688775 containerd[1609]: time="2026-01-28T01:45:01.688780177Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 
01:45:01.689193 containerd[1609]: time="2026-01-28T01:45:01.689091859Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:45:01.689193 containerd[1609]: time="2026-01-28T01:45:01.689166171Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:45:01.689593 containerd[1609]: time="2026-01-28T01:45:01.689221256Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:45:01.694049 containerd[1609]: time="2026-01-28T01:45:01.691528767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:45:01.694049 containerd[1609]: time="2026-01-28T01:45:01.691565582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694457735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694488044Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694511645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694537062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694562187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694579473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.694595784Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695044594Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695078311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695091927Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695109844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695120832Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695137926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:45:01.695258 containerd[1609]: time="2026-01-28T01:45:01.695153838Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:45:01.695908 containerd[1609]: time="2026-01-28T01:45:01.695172006Z" level=info msg="runtime interface created" Jan 28 01:45:01.695908 containerd[1609]: time="2026-01-28T01:45:01.695179425Z" level=info msg="created NRI interface" Jan 28 01:45:01.695908 containerd[1609]: time="2026-01-28T01:45:01.695190633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:45:01.696565 containerd[1609]: time="2026-01-28T01:45:01.696038509Z" level=info msg="Connect containerd service" Jan 28 01:45:01.696565 containerd[1609]: time="2026-01-28T01:45:01.696083145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:45:01.706119 
containerd[1609]: time="2026-01-28T01:45:01.706087386Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.381565830Z" level=info msg="Start subscribing containerd event" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.383401022Z" level=info msg="Start recovering state" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385802568Z" level=info msg="Start event monitor" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385854209Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385867443Z" level=info msg="Start streaming server" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385924187Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385978685Z" level=info msg="runtime interface starting up..." Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.385990845Z" level=info msg="starting plugins..." Jan 28 01:45:04.383983 containerd[1609]: time="2026-01-28T01:45:04.386025313Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:45:04.581866 containerd[1609]: time="2026-01-28T01:45:04.391572571Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:45:04.581866 containerd[1609]: time="2026-01-28T01:45:04.391901635Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:45:04.581866 containerd[1609]: time="2026-01-28T01:45:04.409085226Z" level=info msg="containerd successfully booted in 2.817052s" Jan 28 01:45:04.516521 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 28 01:45:04.990130 tar[1607]: linux-amd64/README.md Jan 28 01:45:05.068378 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:45:08.262209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:45:08.268887 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:45:08.277854 systemd[1]: Startup finished in 15.866s (kernel) + 45.607s (initrd) + 31.984s (userspace) = 1min 33.458s. Jan 28 01:45:08.357520 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:45:08.841428 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:45:08.846386 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:41208.service - OpenSSH per-connection server daemon (10.0.0.1:41208). Jan 28 01:45:09.218364 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 41208 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:09.233897 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:09.268891 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:45:09.271615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:45:09.288405 systemd-logind[1586]: New session 1 of user core. Jan 28 01:45:09.327099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:45:09.335195 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:45:09.399467 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:09.438879 systemd-logind[1586]: New session 2 of user core. Jan 28 01:45:09.881123 systemd[1713]: Queued start job for default target default.target. Jan 28 01:45:09.904936 systemd[1713]: Created slice app.slice - User Application Slice. 
Jan 28 01:45:09.905027 systemd[1713]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:45:09.905047 systemd[1713]: Reached target paths.target - Paths. Jan 28 01:45:09.905140 systemd[1713]: Reached target timers.target - Timers. Jan 28 01:45:09.916627 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:45:09.923157 systemd[1713]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:45:10.048619 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:45:10.048905 systemd[1713]: Reached target sockets.target - Sockets. Jan 28 01:45:10.054492 systemd[1713]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 01:45:10.072386 systemd[1713]: Reached target basic.target - Basic System. Jan 28 01:45:10.072564 systemd[1713]: Reached target default.target - Main User Target. Jan 28 01:45:10.072630 systemd[1713]: Startup finished in 618ms. Jan 28 01:45:10.074055 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:45:10.111554 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:45:10.212449 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:41210.service - OpenSSH per-connection server daemon (10.0.0.1:41210). Jan 28 01:45:10.643296 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 41210 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:10.651348 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:10.705798 systemd-logind[1586]: New session 3 of user core. Jan 28 01:45:10.723041 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 28 01:45:10.858318 sshd[1732]: Connection closed by 10.0.0.1 port 41210 Jan 28 01:45:10.860033 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 28 01:45:10.901981 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:41210.service: Deactivated successfully. Jan 28 01:45:10.908138 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:45:10.914877 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:45:10.940112 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:41216.service - OpenSSH per-connection server daemon (10.0.0.1:41216). Jan 28 01:45:10.945205 systemd-logind[1586]: Removed session 3. Jan 28 01:45:11.118052 kubelet[1700]: E0128 01:45:11.117041 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:45:11.160640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:45:11.161101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:45:11.161934 systemd[1]: kubelet.service: Consumed 1.747s CPU time, 270.5M memory peak. Jan 28 01:45:11.247469 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 41216 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:11.253923 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:11.343570 systemd-logind[1586]: New session 4 of user core. Jan 28 01:45:11.351005 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 01:45:11.435218 sshd[1744]: Connection closed by 10.0.0.1 port 41216 Jan 28 01:45:11.435936 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 28 01:45:11.469583 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:41218.service - OpenSSH per-connection server daemon (10.0.0.1:41218). Jan 28 01:45:11.475436 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:41216.service: Deactivated successfully. Jan 28 01:45:11.480371 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:45:11.497099 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:45:11.502092 systemd-logind[1586]: Removed session 4. Jan 28 01:45:11.673468 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 41218 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:11.677480 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:11.731556 systemd-logind[1586]: New session 5 of user core. Jan 28 01:45:11.763154 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:45:11.884914 sshd[1754]: Connection closed by 10.0.0.1 port 41218 Jan 28 01:45:11.887970 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 28 01:45:11.917571 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:41218.service: Deactivated successfully. Jan 28 01:45:11.924564 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:45:11.948835 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:45:11.957455 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:41226.service - OpenSSH per-connection server daemon (10.0.0.1:41226). Jan 28 01:45:11.961939 systemd-logind[1586]: Removed session 5. 
Jan 28 01:45:12.232273 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 41226 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:12.242270 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:12.289499 systemd-logind[1586]: New session 6 of user core. Jan 28 01:45:12.301254 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:45:12.463929 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:45:12.464642 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:45:12.527031 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 28 01:45:12.551296 sshd[1764]: Connection closed by 10.0.0.1 port 41226 Jan 28 01:45:12.548860 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 28 01:45:12.610614 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:41226.service: Deactivated successfully. Jan 28 01:45:12.613891 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:45:12.643184 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:45:12.657577 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:42782.service - OpenSSH per-connection server daemon (10.0.0.1:42782). Jan 28 01:45:12.660997 systemd-logind[1586]: Removed session 6. Jan 28 01:45:12.928136 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 42782 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:45:12.934184 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:45:12.985539 systemd-logind[1586]: New session 7 of user core. Jan 28 01:45:13.007377 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 01:45:13.208052 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:45:13.211077 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:45:13.263074 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 28 01:45:13.367257 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:45:13.373943 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:45:13.439896 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:45:13.749071 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 28 01:45:13.749347 kernel: audit: type=1305 audit(1769564713.721:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:45:13.721000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:45:13.749506 augenrules[1802]: No rules Jan 28 01:45:13.725309 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:45:13.741495 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 28 01:45:13.725928 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 28 01:45:13.721000 audit[1802]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f74dc40 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:13.766922 sshd[1776]: Connection closed by 10.0.0.1 port 42782
Jan 28 01:45:13.768310 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Jan 28 01:45:13.817875 kernel: audit: type=1300 audit(1769564713.721:226): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f74dc40 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:13.818025 kernel: audit: type=1327 audit(1769564713.721:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 28 01:45:13.721000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 28 01:45:13.845618 kernel: audit: type=1130 audit(1769564713.725:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.883302 kernel: audit: type=1131 audit(1769564713.725:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.911918 kernel: audit: type=1106 audit(1769564713.740:229): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.740000 audit[1777]: USER_END pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.916135 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:42782.service: Deactivated successfully.
Jan 28 01:45:13.923271 systemd[1]: session-7.scope: Deactivated successfully.
Jan 28 01:45:13.936526 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit.
Jan 28 01:45:13.948654 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794).
Jan 28 01:45:13.965864 systemd-logind[1586]: Removed session 7.
Jan 28 01:45:13.979492 kernel: audit: type=1104 audit(1769564713.741:230): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.741000 audit[1777]: CRED_DISP pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.761000 audit[1772]: USER_END pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:13.761000 audit[1772]: CRED_DISP pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.104565 kernel: audit: type=1106 audit(1769564713.761:231): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.104779 kernel: audit: type=1104 audit(1769564713.761:232): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.104822 kernel: audit: type=1131 audit(1769564713.916:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:42782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:42782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:13.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:42794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:14.272302 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:45:14.268000 audit[1811]: USER_ACCT pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.279000 audit[1811]: CRED_ACQ pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.279000 audit[1811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8368c310 a2=3 a3=0 items=0 ppid=1 pid=1811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:14.279000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:45:14.282448 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:45:14.351513 systemd-logind[1586]: New session 8 of user core.
Jan 28 01:45:14.371505 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 28 01:45:14.396000 audit[1811]: USER_START pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.412000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:45:14.538000 audit[1816]: USER_ACCT pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:14.549000 audit[1816]: CRED_REFR pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:14.549000 audit[1816]: USER_START pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:14.551219 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 28 01:45:14.552017 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 28 01:45:19.692809 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 28 01:45:19.849327 (dockerd)[1838]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 28 01:45:21.229441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 28 01:45:21.350521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:45:22.985968 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1565477099 wd_nsec: 1565476997
Jan 28 01:45:23.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:23.199473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:45:23.206486 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 28 01:45:23.206576 kernel: audit: type=1130 audit(1769564723.199:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:45:23.272048 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:45:24.675608 kubelet[1852]: E0128 01:45:24.674552 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:45:24.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:45:24.705201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:45:24.705636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:45:24.707010 systemd[1]: kubelet.service: Consumed 1.523s CPU time, 110.9M memory peak.
Jan 28 01:45:24.761517 kernel: audit: type=1131 audit(1769564724.703:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:45:25.915913 dockerd[1838]: time="2026-01-28T01:45:25.915507591Z" level=info msg="Starting up"
Jan 28 01:45:25.925849 dockerd[1838]: time="2026-01-28T01:45:25.925459031Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 28 01:45:26.203766 dockerd[1838]: time="2026-01-28T01:45:26.201495888Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 28 01:45:26.743462 dockerd[1838]: time="2026-01-28T01:45:26.735932922Z" level=info msg="Loading containers: start."
Jan 28 01:45:26.920370 kernel: Initializing XFRM netlink socket
Jan 28 01:45:28.015000 audit[1907]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.015000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe29c3f040 a2=0 a3=0 items=0 ppid=1838 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.055103 kernel: audit: type=1325 audit(1769564728.015:245): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.055294 kernel: audit: type=1300 audit(1769564728.015:245): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe29c3f040 a2=0 a3=0 items=0 ppid=1838 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.055354 kernel: audit: type=1327 audit(1769564728.015:245): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 28 01:45:28.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 28 01:45:28.067345 kernel: audit: type=1325 audit(1769564728.039:246): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.039000 audit[1909]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.076641 kernel: audit: type=1300 audit(1769564728.039:246): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8b513820 a2=0 a3=0 items=0 ppid=1838 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.039000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8b513820 a2=0 a3=0 items=0 ppid=1838 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.093450 kernel: audit: type=1327 audit(1769564728.039:246): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 28 01:45:28.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 28 01:45:28.082000 audit[1911]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.117198 kernel: audit: type=1325 audit(1769564728.082:247): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.117328 kernel: audit: type=1300 audit(1769564728.082:247): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91929ca0 a2=0 a3=0 items=0 ppid=1838 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.082000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91929ca0 a2=0 a3=0 items=0 ppid=1838 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.082000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 28 01:45:28.109000 audit[1913]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.109000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6f134a80 a2=0 a3=0 items=0 ppid=1838 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 28 01:45:28.124000 audit[1915]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.124000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf4be9020 a2=0 a3=0 items=0 ppid=1838 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 28 01:45:28.137000 audit[1917]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.137000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff267d9980 a2=0 a3=0 items=0 ppid=1838 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 28 01:45:28.150000 audit[1919]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.150000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffebcb1f1c0 a2=0 a3=0 items=0 ppid=1838 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 28 01:45:28.257000 audit[1921]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.537775 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 28 01:45:28.537923 kernel: audit: type=1325 audit(1769564728.257:252): table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.257000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdb2356710 a2=0 a3=0 items=0 ppid=1838 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.567390 kernel: audit: type=1300 audit(1769564728.257:252): arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdb2356710 a2=0 a3=0 items=0 ppid=1838 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.567489 kernel: audit: type=1327 audit(1769564728.257:252): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 28 01:45:28.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 28 01:45:28.583482 kernel: audit: type=1325 audit(1769564728.548:253): table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.548000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.548000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc4e204db0 a2=0 a3=0 items=0 ppid=1838 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.615249 kernel: audit: type=1300 audit(1769564728.548:253): arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc4e204db0 a2=0 a3=0 items=0 ppid=1838 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.662134 kernel: audit: type=1327 audit(1769564728.548:253): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 28 01:45:28.662379 kernel: audit: type=1325 audit(1769564728.576:254): table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.662444 kernel: audit: type=1300 audit(1769564728.576:254): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc3cb61760 a2=0 a3=0 items=0 ppid=1838 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 28 01:45:28.576000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.576000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc3cb61760 a2=0 a3=0 items=0 ppid=1838 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.689560 kernel: audit: type=1327 audit(1769564728.576:254): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 28 01:45:28.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 28 01:45:28.594000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.594000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcfcd66100 a2=0 a3=0 items=0 ppid=1838 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 28 01:45:28.674000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.674000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdd89220e0 a2=0 a3=0 items=0 ppid=1838 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 28 01:45:28.689000 audit[1932]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.689000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffec581d710 a2=0 a3=0 items=0 ppid=1838 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.699980 kernel: audit: type=1325 audit(1769564728.594:255): table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:28.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 28 01:45:28.931000 audit[1962]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:28.931000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff44a030c0 a2=0 a3=0 items=0 ppid=1838 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 28 01:45:28.977000 audit[1964]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:28.977000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd455c2740 a2=0 a3=0 items=0 ppid=1838 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:28.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 28 01:45:29.040000 audit[1966]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.040000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11d99520 a2=0 a3=0 items=0 ppid=1838 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 28 01:45:29.075000 audit[1968]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.075000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d8da5a0 a2=0 a3=0 items=0 ppid=1838 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.075000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 28 01:45:29.105000 audit[1970]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.105000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1b96fbe0 a2=0 a3=0 items=0 ppid=1838 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 28 01:45:29.125000 audit[1972]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.125000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7d53d290 a2=0 a3=0 items=0 ppid=1838 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 28 01:45:29.137000 audit[1974]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.137000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc3f7bc5d0 a2=0 a3=0 items=0 ppid=1838 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 28 01:45:29.148000 audit[1976]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.148000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd4f6e9bd0 a2=0 a3=0 items=0 ppid=1838 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 28 01:45:29.161000 audit[1978]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.161000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffff00efc40 a2=0 a3=0 items=0 ppid=1838 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Jan 28 01:45:29.172000 audit[1980]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.172000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffebace6fc0 a2=0 a3=0 items=0 ppid=1838 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 28 01:45:29.217000 audit[1982]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.217000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffedf86edd0 a2=0 a3=0 items=0 ppid=1838 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 28 01:45:29.251000 audit[1984]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.251000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd07e7e010 a2=0 a3=0 items=0 ppid=1838 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 28 01:45:29.256000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.256000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc297084c0 a2=0 a3=0 items=0 ppid=1838 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 28 01:45:29.282000 audit[1991]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:29.282000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeaa4247a0 a2=0 a3=0 items=0 ppid=1838 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 28 01:45:29.290000 audit[1993]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:29.290000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd35060210 a2=0 a3=0 items=0 ppid=1838 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 28 01:45:29.297000 audit[1995]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:29.297000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb31ea000 a2=0 a3=0 items=0 ppid=1838 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 28 01:45:29.311000 audit[1997]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.311000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff27736e00 a2=0 a3=0 items=0 ppid=1838 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 28 01:45:29.321000 audit[1999]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.321000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc991290b0 a2=0 a3=0 items=0 ppid=1838 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 28 01:45:29.327000 audit[2001]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 28 01:45:29.327000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc68ead5e0 a2=0 a3=0 items=0 ppid=1838 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:45:29.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 28 01:45:29.452000 audit[2006]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:45:29.452000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd6c3c02e0 a2=0 a3=0 items=0 ppid=1838 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi"
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 01:45:29.467000 audit[2008]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.467000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff849c4170 a2=0 a3=0 items=0 ppid=1838 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 01:45:29.506000 audit[2016]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.506000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcbc2026c0 a2=0 a3=0 items=0 ppid=1838 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.506000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 01:45:29.697000 audit[2022]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.697000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdad4372d0 a2=0 a3=0 items=0 ppid=1838 pid=2022 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 01:45:29.734000 audit[2024]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.734000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffef47d7bb0 a2=0 a3=0 items=0 ppid=1838 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 01:45:29.768000 audit[2026]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.768000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff14dd4010 a2=0 a3=0 items=0 ppid=1838 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 01:45:29.779000 audit[2028]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2028 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.779000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc7a2435c0 a2=0 a3=0 items=0 ppid=1838 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:45:29.788000 audit[2030]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:45:29.788000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd3e7dd1e0 a2=0 a3=0 items=0 ppid=1838 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:45:29.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 01:45:29.797473 systemd-networkd[1515]: docker0: Link UP Jan 28 01:45:29.817146 dockerd[1838]: time="2026-01-28T01:45:29.812127585Z" level=info msg="Loading containers: done." 
Jan 28 01:45:30.125959 dockerd[1838]: time="2026-01-28T01:45:30.124571143Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:45:30.125959 dockerd[1838]: time="2026-01-28T01:45:30.124896456Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:45:30.130054 dockerd[1838]: time="2026-01-28T01:45:30.128428033Z" level=info msg="Initializing buildkit" Jan 28 01:45:31.133152 dockerd[1838]: time="2026-01-28T01:45:31.131816622Z" level=info msg="Completed buildkit initialization" Jan 28 01:45:31.191143 dockerd[1838]: time="2026-01-28T01:45:31.190857306Z" level=info msg="Daemon has completed initialization" Jan 28 01:45:31.198906 dockerd[1838]: time="2026-01-28T01:45:31.198042724Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:45:31.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:45:31.200559 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:45:34.740413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:45:34.876443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:45:37.167862 kernel: kauditd_printk_skb: 90 callbacks suppressed Jan 28 01:45:37.168080 kernel: audit: type=1130 audit(1769564737.146:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:45:37.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:45:37.150431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:45:37.197506 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:45:37.508417 containerd[1609]: time="2026-01-28T01:45:37.506443467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 28 01:45:37.594240 kubelet[2077]: E0128 01:45:37.593592 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:45:37.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:45:37.605283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:45:37.605933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:45:37.607578 systemd[1]: kubelet.service: Consumed 817ms CPU time, 109.3M memory peak. Jan 28 01:45:37.628783 kernel: audit: type=1131 audit(1769564737.605:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:45:41.244739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107327536.mount: Deactivated successfully. 
Jan 28 01:45:45.847258 update_engine[1589]: I20260128 01:45:45.839588 1589 update_attempter.cc:509] Updating boot flags... Jan 28 01:45:47.756616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:45:47.817288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:45:49.073012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:45:49.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:45:49.149030 kernel: audit: type=1130 audit(1769564749.072:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:45:49.174193 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:45:49.671985 kubelet[2172]: E0128 01:45:49.670820 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:45:49.678079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:45:49.678454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:45:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:45:49.679952 systemd[1]: kubelet.service: Consumed 792ms CPU time, 111.1M memory peak. 
Jan 28 01:45:49.703711 kernel: audit: type=1131 audit(1769564749.678:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:45:56.826316 containerd[1609]: time="2026-01-28T01:45:56.823027132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.851242 containerd[1609]: time="2026-01-28T01:45:56.831733264Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30106714" Jan 28 01:45:56.851242 containerd[1609]: time="2026-01-28T01:45:56.839110652Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.874776 containerd[1609]: time="2026-01-28T01:45:56.874214335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.889920 containerd[1609]: time="2026-01-28T01:45:56.883966301Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 19.377342093s" Jan 28 01:45:56.889920 containerd[1609]: time="2026-01-28T01:45:56.885159221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 28 01:45:56.910947 containerd[1609]: time="2026-01-28T01:45:56.910336300Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 28 01:45:59.721448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:45:59.736112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:46:00.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:00.244917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:46:00.293288 kernel: audit: type=1130 audit(1769564760.243:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:00.300639 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:46:00.701157 kubelet[2192]: E0128 01:46:00.698843 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:46:00.713223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:46:00.713572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:46:00.722064 systemd[1]: kubelet.service: Consumed 403ms CPU time, 110.6M memory peak. Jan 28 01:46:00.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 01:46:00.755289 kernel: audit: type=1131 audit(1769564760.721:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:11.026534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:46:11.094634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:46:14.106519 containerd[1609]: time="2026-01-28T01:46:14.101974455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:14.112625 containerd[1609]: time="2026-01-28T01:46:14.112565451Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 28 01:46:14.118613 containerd[1609]: time="2026-01-28T01:46:14.116987820Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:14.138884 containerd[1609]: time="2026-01-28T01:46:14.136656752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:14.153632 containerd[1609]: time="2026-01-28T01:46:14.153575782Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 17.243137398s" Jan 28 01:46:14.154099 containerd[1609]: 
time="2026-01-28T01:46:14.154001438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 28 01:46:14.187995 containerd[1609]: time="2026-01-28T01:46:14.182874691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 28 01:46:14.239916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:46:14.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:14.357083 kernel: audit: type=1130 audit(1769564774.246:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:14.370788 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:46:14.742764 kubelet[2209]: E0128 01:46:14.742059 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:46:14.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:14.759287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:46:14.759651 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:46:14.760484 systemd[1]: kubelet.service: Consumed 970ms CPU time, 110.5M memory peak. Jan 28 01:46:14.799890 kernel: audit: type=1131 audit(1769564774.759:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:24.268252 containerd[1609]: time="2026-01-28T01:46:24.267611193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:24.271409 containerd[1609]: time="2026-01-28T01:46:24.269507303Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20152717" Jan 28 01:46:24.274453 containerd[1609]: time="2026-01-28T01:46:24.273950135Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:24.296390 containerd[1609]: time="2026-01-28T01:46:24.296285726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:24.310081 containerd[1609]: time="2026-01-28T01:46:24.308917595Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 10.125986994s" Jan 28 01:46:24.310081 containerd[1609]: time="2026-01-28T01:46:24.309086570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference 
\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 28 01:46:24.324097 containerd[1609]: time="2026-01-28T01:46:24.323588307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 28 01:46:25.016564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:46:25.051823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:46:27.034658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:46:27.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:27.059763 kernel: audit: type=1130 audit(1769564787.034:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:27.079765 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:46:28.587606 kubelet[2234]: E0128 01:46:28.587235 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:46:28.599791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:46:28.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:28.600165 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:46:28.601092 systemd[1]: kubelet.service: Consumed 1.155s CPU time, 110.4M memory peak. Jan 28 01:46:28.629788 kernel: audit: type=1131 audit(1769564788.599:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:31.546986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41296261.mount: Deactivated successfully. Jan 28 01:46:38.720909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:46:38.735497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:46:40.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:40.707251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:46:40.764754 kernel: audit: type=1130 audit(1769564800.705:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:46:40.785772 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:46:41.670996 kubelet[2254]: E0128 01:46:41.664514 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:46:41.989312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:46:42.058471 kernel: audit: type=1131 audit(1769564801.992:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:41.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:41.994783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:46:41.998894 systemd[1]: kubelet.service: Consumed 798ms CPU time, 108.8M memory peak. 
Jan 28 01:46:44.734327 containerd[1609]: time="2026-01-28T01:46:44.733070466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:44.746517 containerd[1609]: time="2026-01-28T01:46:44.743126771Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 28 01:46:44.755359 containerd[1609]: time="2026-01-28T01:46:44.753323613Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:44.760374 containerd[1609]: time="2026-01-28T01:46:44.758127324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:46:44.760374 containerd[1609]: time="2026-01-28T01:46:44.759227229Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 20.435584678s" Jan 28 01:46:44.760374 containerd[1609]: time="2026-01-28T01:46:44.759322059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 28 01:46:44.774557 containerd[1609]: time="2026-01-28T01:46:44.773317277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 28 01:46:47.697983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588406068.mount: Deactivated successfully. 
Jan 28 01:46:52.581627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:46:52.601443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:46:54.187895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:46:54.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:54.261456 kernel: audit: type=1130 audit(1769564814.192:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:46:54.400942 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:46:55.236884 kubelet[2322]: E0128 01:46:55.236524 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:46:55.260783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:46:55.263451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:46:55.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:46:55.269831 systemd[1]: kubelet.service: Consumed 817ms CPU time, 110.7M memory peak. 
Jan 28 01:46:55.320268 kernel: audit: type=1131 audit(1769564815.267:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:05.384533 containerd[1609]: time="2026-01-28T01:47:05.381046783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:05.405139 containerd[1609]: time="2026-01-28T01:47:05.402623155Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931115"
Jan 28 01:47:05.424774 containerd[1609]: time="2026-01-28T01:47:05.424228731Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:05.440648 containerd[1609]: time="2026-01-28T01:47:05.438189624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:05.492230 containerd[1609]: time="2026-01-28T01:47:05.483347207Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 20.709891147s"
Jan 28 01:47:05.525500 containerd[1609]: time="2026-01-28T01:47:05.492283860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 28 01:47:05.670272 containerd[1609]: time="2026-01-28T01:47:05.663277877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 28 01:47:05.669284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 28 01:47:05.766501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:08.265478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833431192.mount: Deactivated successfully.
Jan 28 01:47:08.324002 containerd[1609]: time="2026-01-28T01:47:08.321919483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:47:08.338622 containerd[1609]: time="2026-01-28T01:47:08.338263001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881"
Jan 28 01:47:08.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:08.367068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:08.382824 containerd[1609]: time="2026-01-28T01:47:08.382616598Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:47:08.401764 containerd[1609]: time="2026-01-28T01:47:08.399824475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:47:08.402225 containerd[1609]: time="2026-01-28T01:47:08.402190354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.738637201s"
Jan 28 01:47:08.402362 containerd[1609]: time="2026-01-28T01:47:08.402336776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 28 01:47:08.413818 kernel: audit: type=1130 audit(1769564828.366:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:08.419302 containerd[1609]: time="2026-01-28T01:47:08.419137587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 28 01:47:08.430494 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:47:09.011599 kubelet[2343]: E0128 01:47:09.011297 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:47:09.021573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:47:09.022135 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:47:09.025315 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 108M memory peak.
Jan 28 01:47:09.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:09.073074 kernel: audit: type=1131 audit(1769564829.024:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:11.510929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685897979.mount: Deactivated successfully.
Jan 28 01:47:19.232584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 28 01:47:19.306604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:21.574054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:21.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:21.623548 kernel: audit: type=1130 audit(1769564841.578:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:21.712550 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:47:22.835200 kubelet[2414]: E0128 01:47:22.792126 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:47:22.915192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:47:22.915610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:47:22.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:22.923986 systemd[1]: kubelet.service: Consumed 1.905s CPU time, 109.1M memory peak.
Jan 28 01:47:22.953529 kernel: audit: type=1131 audit(1769564842.921:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:32.976524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 28 01:47:33.592561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:37.804169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:37.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:37.841005 kernel: audit: type=1130 audit(1769564857.805:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:37.848415 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:47:38.477150 kubelet[2431]: E0128 01:47:38.476946 2431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:47:38.489858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:47:38.490420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:47:38.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:38.493136 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 110.3M memory peak.
Jan 28 01:47:38.505539 kernel: audit: type=1131 audit(1769564858.492:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:40.198388 containerd[1609]: time="2026-01-28T01:47:40.197126177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:40.204112 containerd[1609]: time="2026-01-28T01:47:40.204053502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58916088"
Jan 28 01:47:40.206903 containerd[1609]: time="2026-01-28T01:47:40.206781101Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:40.221494 containerd[1609]: time="2026-01-28T01:47:40.221171185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:47:40.267903 containerd[1609]: time="2026-01-28T01:47:40.263929028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 31.844412285s"
Jan 28 01:47:40.284662 containerd[1609]: time="2026-01-28T01:47:40.270759909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 28 01:47:48.725897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 28 01:47:48.736790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:50.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:50.338631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:50.381821 kernel: audit: type=1130 audit(1769564870.337:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:50.388479 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:47:50.871028 kubelet[2475]: E0128 01:47:50.869808 2475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:47:50.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:50.933130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:47:50.933819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:47:50.937948 systemd[1]: kubelet.service: Consumed 803ms CPU time, 108.9M memory peak.
Jan 28 01:47:50.948827 kernel: audit: type=1131 audit(1769564870.937:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:51.621650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:51.621981 systemd[1]: kubelet.service: Consumed 803ms CPU time, 108.9M memory peak.
Jan 28 01:47:51.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:51.647977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:51.678770 kernel: audit: type=1130 audit(1769564871.619:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:51.678923 kernel: audit: type=1131 audit(1769564871.619:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:51.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:52.058762 systemd[1]: Reload requested from client PID 2491 ('systemctl') (unit session-8.scope)...
Jan 28 01:47:52.058862 systemd[1]: Reloading...
Jan 28 01:47:52.754022 zram_generator::config[2533]: No configuration found.
Jan 28 01:47:53.748098 systemd[1]: Reloading finished in 1587 ms.
Jan 28 01:47:53.896000 audit: BPF prog-id=63 op=LOAD
Jan 28 01:47:53.907183 kernel: audit: type=1334 audit(1769564873.896:310): prog-id=63 op=LOAD
Jan 28 01:47:53.908412 kernel: audit: type=1334 audit(1769564873.896:311): prog-id=64 op=LOAD
Jan 28 01:47:53.896000 audit: BPF prog-id=64 op=LOAD
Jan 28 01:47:53.913806 kernel: audit: type=1334 audit(1769564873.896:312): prog-id=50 op=UNLOAD
Jan 28 01:47:53.896000 audit: BPF prog-id=50 op=UNLOAD
Jan 28 01:47:53.896000 audit: BPF prog-id=51 op=UNLOAD
Jan 28 01:47:53.927887 kernel: audit: type=1334 audit(1769564873.896:313): prog-id=51 op=UNLOAD
Jan 28 01:47:53.927977 kernel: audit: type=1334 audit(1769564873.900:314): prog-id=65 op=LOAD
Jan 28 01:47:53.900000 audit: BPF prog-id=65 op=LOAD
Jan 28 01:47:53.900000 audit: BPF prog-id=49 op=UNLOAD
Jan 28 01:47:53.959785 kernel: audit: type=1334 audit(1769564873.900:315): prog-id=49 op=UNLOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=66 op=LOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=46 op=UNLOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=67 op=LOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=68 op=LOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=47 op=UNLOAD
Jan 28 01:47:53.901000 audit: BPF prog-id=48 op=UNLOAD
Jan 28 01:47:53.908000 audit: BPF prog-id=69 op=LOAD
Jan 28 01:47:53.908000 audit: BPF prog-id=52 op=UNLOAD
Jan 28 01:47:53.909000 audit: BPF prog-id=70 op=LOAD
Jan 28 01:47:53.909000 audit: BPF prog-id=71 op=LOAD
Jan 28 01:47:53.909000 audit: BPF prog-id=53 op=UNLOAD
Jan 28 01:47:53.909000 audit: BPF prog-id=54 op=UNLOAD
Jan 28 01:47:54.017000 audit: BPF prog-id=72 op=LOAD
Jan 28 01:47:54.017000 audit: BPF prog-id=43 op=UNLOAD
Jan 28 01:47:54.018000 audit: BPF prog-id=73 op=LOAD
Jan 28 01:47:54.018000 audit: BPF prog-id=74 op=LOAD
Jan 28 01:47:54.018000 audit: BPF prog-id=44 op=UNLOAD
Jan 28 01:47:54.018000 audit: BPF prog-id=45 op=UNLOAD
Jan 28 01:47:54.024000 audit: BPF prog-id=75 op=LOAD
Jan 28 01:47:54.026000 audit: BPF prog-id=55 op=UNLOAD
Jan 28 01:47:54.027000 audit: BPF prog-id=76 op=LOAD
Jan 28 01:47:54.043000 audit: BPF prog-id=77 op=LOAD
Jan 28 01:47:54.051000 audit: BPF prog-id=56 op=UNLOAD
Jan 28 01:47:54.053000 audit: BPF prog-id=57 op=UNLOAD
Jan 28 01:47:54.086000 audit: BPF prog-id=78 op=LOAD
Jan 28 01:47:54.086000 audit: BPF prog-id=58 op=UNLOAD
Jan 28 01:47:54.096000 audit: BPF prog-id=79 op=LOAD
Jan 28 01:47:54.096000 audit: BPF prog-id=59 op=UNLOAD
Jan 28 01:47:54.100000 audit: BPF prog-id=80 op=LOAD
Jan 28 01:47:54.106000 audit: BPF prog-id=60 op=UNLOAD
Jan 28 01:47:54.109000 audit: BPF prog-id=81 op=LOAD
Jan 28 01:47:54.110000 audit: BPF prog-id=82 op=LOAD
Jan 28 01:47:54.110000 audit: BPF prog-id=61 op=UNLOAD
Jan 28 01:47:54.110000 audit: BPF prog-id=62 op=UNLOAD
Jan 28 01:47:54.200653 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 28 01:47:54.200978 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 28 01:47:54.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 01:47:54.201619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:54.201781 systemd[1]: kubelet.service: Consumed 418ms CPU time, 98.5M memory peak.
Jan 28 01:47:54.206733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:47:55.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:55.370249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:47:55.385030 kernel: kauditd_printk_skb: 35 callbacks suppressed
Jan 28 01:47:55.385083 kernel: audit: type=1130 audit(1769564875.369:351): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:47:55.431394 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 28 01:47:55.764738 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:47:55.764738 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 28 01:47:55.764738 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:47:55.764738 kubelet[2584]: I0128 01:47:55.763546 2584 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 01:47:59.353518 kubelet[2584]: I0128 01:47:59.352883 2584 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 28 01:47:59.353518 kubelet[2584]: I0128 01:47:59.352967 2584 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 01:47:59.378979 kubelet[2584]: I0128 01:47:59.353963 2584 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 28 01:47:59.573736 kubelet[2584]: I0128 01:47:59.569865 2584 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 28 01:47:59.594582 kubelet[2584]: E0128 01:47:59.591531 2584 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 28 01:47:59.695834 kubelet[2584]: I0128 01:47:59.692924 2584 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 01:47:59.725074 kubelet[2584]: I0128 01:47:59.722632 2584 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 28 01:47:59.725074 kubelet[2584]: I0128 01:47:59.723436 2584 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 01:47:59.725074 kubelet[2584]: I0128 01:47:59.723545 2584 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 01:47:59.725074 kubelet[2584]: I0128 01:47:59.723907 2584 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 01:47:59.726024 kubelet[2584]: I0128 01:47:59.723923 2584 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 01:47:59.726024 kubelet[2584]: I0128 01:47:59.724314 2584 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:47:59.749535 kubelet[2584]: I0128 01:47:59.746410 2584 kubelet.go:480] "Attempting to sync node with API server"
Jan 28 01:47:59.749535 kubelet[2584]: I0128 01:47:59.746468 2584 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 01:47:59.749535 kubelet[2584]: I0128 01:47:59.746505 2584 kubelet.go:386] "Adding apiserver pod source"
Jan 28 01:47:59.749535 kubelet[2584]: I0128 01:47:59.746921 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 01:47:59.787641 kubelet[2584]: E0128 01:47:59.785416 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 28 01:47:59.797426 kubelet[2584]: E0128 01:47:59.792332 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 28 01:47:59.808566 kubelet[2584]: I0128 01:47:59.806339 2584 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 28 01:47:59.808566 kubelet[2584]: I0128 01:47:59.807599 2584 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 28 01:47:59.815357 kubelet[2584]: W0128 01:47:59.814938 2584 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 28 01:47:59.875908 kubelet[2584]: I0128 01:47:59.874946 2584 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 28 01:47:59.883761 kubelet[2584]: I0128 01:47:59.878238 2584 server.go:1289] "Started kubelet"
Jan 28 01:47:59.883761 kubelet[2584]: I0128 01:47:59.880403 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 01:47:59.883761 kubelet[2584]: I0128 01:47:59.882286 2584 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 01:47:59.963539 kubelet[2584]: I0128 01:47:59.961303 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 01:47:59.963539 kubelet[2584]: I0128 01:47:59.962355 2584 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 01:47:59.980475 kubelet[2584]: I0128 01:47:59.978036 2584 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 28 01:47:59.985859 kubelet[2584]: I0128 01:47:59.985778 2584 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 28 01:47:59.990724 kubelet[2584]: E0128 01:47:59.987390 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:47:59.990724 kubelet[2584]: I0128 01:47:59.988448 2584 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 28 01:47:59.990724 kubelet[2584]: I0128 01:47:59.988527 2584 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 01:47:59.990724 kubelet[2584]: I0128 01:47:59.988645 2584 server.go:317] "Adding debug handlers to kubelet server"
Jan 28 01:47:59.990724 kubelet[2584]: E0128 01:47:59.990422 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 28 01:47:59.990724 kubelet[2584]: E0128 01:47:59.990499 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms"
Jan 28 01:48:00.005244 kubelet[2584]: I0128 01:48:00.005157 2584 factory.go:223] Registration of the systemd container factory successfully
Jan 28 01:48:00.005400 kubelet[2584]: I0128 01:48:00.005362 2584 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 28 01:48:00.012406 kubelet[2584]: E0128 01:48:00.005823 2584 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec1e1f824e092 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:47:59.878152338 +0000 UTC m=+4.410266344,LastTimestamp:2026-01-28 01:47:59.878152338 +0000 UTC m=+4.410266344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 28 01:48:00.020317 kubelet[2584]: I0128 01:48:00.017476 2584 factory.go:223] Registration of the containerd container factory successfully
Jan 28 01:48:00.020317 kubelet[2584]: E0128 01:48:00.018156 2584 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 28 01:48:00.039000 audit[2603]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.039000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef5dc1eb0 a2=0 a3=0 items=0 ppid=2584 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.099456 kubelet[2584]: E0128 01:48:00.093796 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:00.108343 kernel: audit: type=1325 audit(1769564880.039:352): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.108468 kernel: audit: type=1300 audit(1769564880.039:352): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef5dc1eb0 a2=0 a3=0 items=0 ppid=2584 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 28 01:48:00.153819 kernel: audit: type=1327 audit(1769564880.039:352): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 28 01:48:00.154825 kernel: audit: type=1325 audit(1769564880.055:353): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.055000 audit[2604]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.055000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3bd6ceb0 a2=0 a3=0 items=0 ppid=2584 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.192220 kernel: audit: type=1300 audit(1769564880.055:353): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3bd6ceb0 a2=0 a3=0 items=0 ppid=2584 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.192394 kubelet[2584]: I0128 01:48:00.189544 2584 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 28 01:48:00.192394 kubelet[2584]: I0128 01:48:00.189569 2584 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 28 01:48:00.192394 kubelet[2584]: I0128 01:48:00.189633 2584 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:48:00.192976 kubelet[2584]: E0128 01:48:00.192639 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms"
Jan 28 01:48:00.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 28 01:48:00.213270 kernel: audit: type=1327 audit(1769564880.055:353): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 28 01:48:00.213355 kernel: audit: type=1325 audit(1769564880.076:354): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.076000 audit[2609]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 28 01:48:00.213648 kubelet[2584]: E0128 01:48:00.212367 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:00.076000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffce5e96e40 a2=0 a3=0 items=0 ppid=2584 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.254905 kubelet[2584]: I0128 01:48:00.254873 2584 policy_none.go:49] "None policy: Start"
Jan 28 01:48:00.255113 kubelet[2584]: I0128 01:48:00.255091 2584 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 28 01:48:00.255273 kubelet[2584]: I0128 01:48:00.255255 2584 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 01:48:00.255831 kernel: audit: type=1300 audit(1769564880.076:354): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffce5e96e40 a2=0 a3=0 items=0 ppid=2584 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:00.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 28 01:48:00.256762 kernel: audit: type=1327 audit(1769564880.076:354): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 28 01:48:00.115000 audit[2612]: NETFILTER_CFG table=filter:45 family=2 entries=2
op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:00.115000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeacb05240 a2=0 a3=0 items=0 ppid=2584 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:48:00.279588 kubelet[2584]: I0128 01:48:00.279424 2584 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 28 01:48:00.277000 audit[2616]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:00.277000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdb7f44410 a2=0 a3=0 items=0 ppid=2584 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 01:48:00.285490 kubelet[2584]: I0128 01:48:00.285447 2584 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 28 01:48:00.286143 kubelet[2584]: I0128 01:48:00.286119 2584 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 28 01:48:00.286458 kubelet[2584]: I0128 01:48:00.286433 2584 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:48:00.286557 kubelet[2584]: I0128 01:48:00.286540 2584 kubelet.go:2436] "Starting kubelet main sync loop" Jan 28 01:48:00.287085 kubelet[2584]: E0128 01:48:00.286920 2584 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:48:00.283000 audit[2617]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:00.283000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd688c7630 a2=0 a3=0 items=0 ppid=2584 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:48:00.290000 audit[2619]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=2619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:00.290000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd5f60450 a2=0 a3=0 items=0 ppid=2584 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.290000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:48:00.292000 audit[2618]: NETFILTER_CFG table=mangle:49 family=2 entries=1 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:00.297629 kubelet[2584]: E0128 01:48:00.288227 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:48:00.292000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefc170850 a2=0 a3=0 items=0 ppid=2584 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:48:00.298000 audit[2620]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:00.298000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeedae5610 a2=0 a3=0 items=0 ppid=2584 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:48:00.305000 audit[2621]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2621 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:00.305000 audit[2621]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddfd113f0 a2=0 a3=0 items=0 ppid=2584 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:48:00.319869 kubelet[2584]: E0128 01:48:00.319589 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:48:00.313000 audit[2622]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:00.313000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4e7ae360 a2=0 a3=0 items=0 ppid=2584 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:48:00.335000 audit[2623]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:00.335000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbe55ec50 a2=0 a3=0 items=0 ppid=2584 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:00.335000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:48:00.340003 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:48:00.378292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:48:00.388320 kubelet[2584]: E0128 01:48:00.388276 2584 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:48:00.411891 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:48:00.421003 kubelet[2584]: E0128 01:48:00.420254 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:48:00.424269 kubelet[2584]: E0128 01:48:00.422907 2584 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:48:00.427766 kubelet[2584]: I0128 01:48:00.426851 2584 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:48:00.427766 kubelet[2584]: I0128 01:48:00.426951 2584 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:48:00.428256 kubelet[2584]: I0128 01:48:00.428234 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:48:00.437592 kubelet[2584]: E0128 01:48:00.435499 2584 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:48:00.437592 kubelet[2584]: E0128 01:48:00.435556 2584 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:48:00.542482 kubelet[2584]: I0128 01:48:00.539165 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:48:00.542482 kubelet[2584]: E0128 01:48:00.540433 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 28 01:48:00.595886 kubelet[2584]: E0128 01:48:00.593974 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Jan 28 01:48:00.726649 kubelet[2584]: I0128 01:48:00.726482 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:00.726649 kubelet[2584]: I0128 01:48:00.726574 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:00.728657 kubelet[2584]: I0128 01:48:00.726785 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:00.728657 kubelet[2584]: I0128 01:48:00.726885 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:48:00.728657 kubelet[2584]: I0128 01:48:00.726909 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:00.728657 kubelet[2584]: I0128 01:48:00.726933 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:00.728657 kubelet[2584]: I0128 01:48:00.727370 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:00.728947 kubelet[2584]: I0128 01:48:00.727587 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:00.728947 kubelet[2584]: I0128 01:48:00.727621 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:00.795261 kubelet[2584]: I0128 01:48:00.791061 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:48:00.795261 kubelet[2584]: E0128 01:48:00.792117 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 28 01:48:00.871638 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 28 01:48:00.888579 kubelet[2584]: E0128 01:48:00.888298 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:48:00.914508 kubelet[2584]: E0128 01:48:00.913410 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:48:00.914508 kubelet[2584]: E0128 01:48:00.914106 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:00.930004 containerd[1609]: time="2026-01-28T01:48:00.929951128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 28 01:48:00.938994 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 28 01:48:01.077353 systemd[1]: Created slice kubepods-burstable-pod7b8d6d985bfe094caafe61d064e436e9.slice - libcontainer container kubepods-burstable-pod7b8d6d985bfe094caafe61d064e436e9.slice. 
Jan 28 01:48:01.095335 kubelet[2584]: E0128 01:48:01.094906 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:48:01.098309 kubelet[2584]: E0128 01:48:01.095593 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:48:01.098309 kubelet[2584]: E0128 01:48:01.096493 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:01.098309 kubelet[2584]: E0128 01:48:01.097439 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:01.099918 containerd[1609]: time="2026-01-28T01:48:01.099103984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 28 01:48:01.099918 containerd[1609]: time="2026-01-28T01:48:01.099138517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b8d6d985bfe094caafe61d064e436e9,Namespace:kube-system,Attempt:0,}" Jan 28 01:48:01.108250 kubelet[2584]: E0128 01:48:01.106590 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:48:01.112611 kubelet[2584]: E0128 01:48:01.111357 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:48:01.210640 kubelet[2584]: I0128 01:48:01.210556 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:48:01.220478 kubelet[2584]: E0128 01:48:01.211144 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 28 01:48:01.255246 kubelet[2584]: E0128 01:48:01.252870 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:48:01.401021 kubelet[2584]: E0128 01:48:01.395838 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Jan 28 01:48:01.434586 containerd[1609]: time="2026-01-28T01:48:01.434530155Z" level=info msg="connecting to shim caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689" address="unix:///run/containerd/s/0a14fb69ef0c1b68cba69839148f5642d0d507da47fa02f29bb022d315553e7f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:48:01.721491 kubelet[2584]: E0128 01:48:01.721116 2584 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: 
connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:48:01.722518 containerd[1609]: time="2026-01-28T01:48:01.721111825Z" level=info msg="connecting to shim c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1" address="unix:///run/containerd/s/d56c9b66ca2f904b2d4bcf074131ff81c6015c3e135b2fa044907746adacdb8e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:48:01.741362 containerd[1609]: time="2026-01-28T01:48:01.740541170Z" level=info msg="connecting to shim cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e" address="unix:///run/containerd/s/004f2f3be3ea3e03f58e057bbaa5bebeda8a7e17fab2acdd2a25a710ace625a7" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:48:02.031464 kubelet[2584]: I0128 01:48:02.025522 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:48:02.062773 kubelet[2584]: E0128 01:48:02.033297 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 28 01:48:02.033424 systemd[1]: Started cri-containerd-caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689.scope - libcontainer container caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689. Jan 28 01:48:02.302989 systemd[1]: Started cri-containerd-c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1.scope - libcontainer container c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1. 
Jan 28 01:48:02.418000 audit: BPF prog-id=83 op=LOAD Jan 28 01:48:02.441486 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 01:48:02.441614 kernel: audit: type=1334 audit(1769564882.418:364): prog-id=83 op=LOAD Jan 28 01:48:02.484238 kernel: audit: type=1334 audit(1769564882.440:365): prog-id=84 op=LOAD Jan 28 01:48:02.484362 kernel: audit: type=1300 audit(1769564882.440:365): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.440000 audit: BPF prog-id=84 op=LOAD Jan 28 01:48:02.440000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.495889 kernel: audit: type=1327 audit(1769564882.440:365): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.521088 kernel: audit: type=1334 audit(1769564882.440:366): prog-id=84 op=UNLOAD Jan 28 01:48:02.554396 kernel: audit: type=1300 audit(1769564882.440:366): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.440000 audit: BPF prog-id=84 op=UNLOAD Jan 28 01:48:02.440000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.444000 audit: BPF prog-id=85 op=LOAD Jan 28 01:48:02.602900 kernel: audit: type=1327 audit(1769564882.440:366): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.603045 kernel: audit: type=1334 audit(1769564882.444:367): prog-id=85 op=LOAD Jan 28 01:48:02.606401 kernel: audit: type=1300 audit(1769564882.444:367): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.444000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.678320 
kernel: audit: type=1327 audit(1769564882.444:367): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.445000 audit: BPF prog-id=86 op=LOAD Jan 28 01:48:02.445000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.445000 audit: BPF prog-id=86 op=UNLOAD Jan 28 01:48:02.445000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 
01:48:02.445000 audit: BPF prog-id=85 op=UNLOAD Jan 28 01:48:02.445000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.445000 audit: BPF prog-id=87 op=LOAD Jan 28 01:48:02.445000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2639 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361636131393863313236313635323063396439336464616265306163 Jan 28 01:48:02.594000 audit: BPF prog-id=88 op=LOAD Jan 28 01:48:02.603000 audit: BPF prog-id=89 op=LOAD Jan 28 01:48:02.603000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.603000 audit: BPF prog-id=89 op=UNLOAD Jan 28 01:48:02.603000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.604000 audit: BPF prog-id=90 op=LOAD Jan 28 01:48:02.604000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.604000 audit: BPF prog-id=91 op=LOAD Jan 28 01:48:02.604000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 01:48:02.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.604000 audit: BPF prog-id=91 op=UNLOAD Jan 28 01:48:02.604000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.604000 audit: BPF prog-id=90 op=UNLOAD Jan 28 01:48:02.604000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.604000 audit: BPF prog-id=92 op=LOAD Jan 28 01:48:02.604000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2650 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:02.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331316163373966383630643035656334366563346234633965366131 Jan 28 01:48:02.837560 systemd[1]: Started cri-containerd-cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e.scope - libcontainer container cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e. Jan 28 01:48:03.010461 kubelet[2584]: E0128 01:48:03.008503 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="3.2s" Jan 28 01:48:03.203000 audit: BPF prog-id=93 op=LOAD Jan 28 01:48:03.205000 audit: BPF prog-id=94 op=LOAD Jan 28 01:48:03.205000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.205000 audit: BPF prog-id=94 op=UNLOAD Jan 28 01:48:03.205000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 01:48:03.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.206000 audit: BPF prog-id=95 op=LOAD Jan 28 01:48:03.206000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.206000 audit: BPF prog-id=96 op=LOAD Jan 28 01:48:03.206000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.206000 audit: BPF prog-id=96 op=UNLOAD Jan 28 01:48:03.206000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.206000 audit: BPF prog-id=95 op=UNLOAD Jan 28 01:48:03.206000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.206000 audit: BPF prog-id=97 op=LOAD Jan 28 01:48:03.206000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2661 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:03.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353535326338633030373630616136303864346131343831323666 Jan 28 01:48:03.218048 containerd[1609]: time="2026-01-28T01:48:03.217981377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b8d6d985bfe094caafe61d064e436e9,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1\"" Jan 28 01:48:03.229549 containerd[1609]: time="2026-01-28T01:48:03.226046491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689\"" Jan 28 01:48:03.239348 kubelet[2584]: E0128 01:48:03.236575 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:03.239348 kubelet[2584]: E0128 01:48:03.238884 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:48:03.241404 kubelet[2584]: E0128 01:48:03.240940 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:03.548602 containerd[1609]: time="2026-01-28T01:48:03.540917519Z" level=info msg="CreateContainer within sandbox \"c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:48:03.555782 kubelet[2584]: E0128 01:48:03.552458 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:48:03.588753 containerd[1609]: time="2026-01-28T01:48:03.582545917Z" 
level=info msg="CreateContainer within sandbox \"caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:48:03.667443 kubelet[2584]: I0128 01:48:03.665800 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:48:03.668495 kubelet[2584]: E0128 01:48:03.668417 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 28 01:48:03.808653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359378867.mount: Deactivated successfully. Jan 28 01:48:03.816738 containerd[1609]: time="2026-01-28T01:48:03.816616552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e\"" Jan 28 01:48:03.818553 containerd[1609]: time="2026-01-28T01:48:03.818518091Z" level=info msg="Container d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:48:03.834228 kubelet[2584]: E0128 01:48:03.834113 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:03.875225 kubelet[2584]: E0128 01:48:03.868474 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:48:03.892006 containerd[1609]: time="2026-01-28T01:48:03.891869680Z" level=info msg="Container 
ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:48:03.900028 kubelet[2584]: E0128 01:48:03.895886 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:48:03.924882 containerd[1609]: time="2026-01-28T01:48:03.924837825Z" level=info msg="CreateContainer within sandbox \"cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:48:03.931783 containerd[1609]: time="2026-01-28T01:48:03.931648655Z" level=info msg="CreateContainer within sandbox \"caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d\"" Jan 28 01:48:03.934103 containerd[1609]: time="2026-01-28T01:48:03.934067678Z" level=info msg="StartContainer for \"d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d\"" Jan 28 01:48:03.958117 containerd[1609]: time="2026-01-28T01:48:03.958064967Z" level=info msg="connecting to shim d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d" address="unix:///run/containerd/s/0a14fb69ef0c1b68cba69839148f5642d0d507da47fa02f29bb022d315553e7f" protocol=ttrpc version=3 Jan 28 01:48:03.961577 containerd[1609]: time="2026-01-28T01:48:03.958630397Z" level=info msg="CreateContainer within sandbox \"c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8\"" Jan 28 01:48:03.962836 containerd[1609]: 
time="2026-01-28T01:48:03.962810861Z" level=info msg="StartContainer for \"ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8\"" Jan 28 01:48:03.976164 containerd[1609]: time="2026-01-28T01:48:03.976112782Z" level=info msg="connecting to shim ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8" address="unix:///run/containerd/s/d56c9b66ca2f904b2d4bcf074131ff81c6015c3e135b2fa044907746adacdb8e" protocol=ttrpc version=3 Jan 28 01:48:04.007325 containerd[1609]: time="2026-01-28T01:48:04.005553078Z" level=info msg="Container 2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:48:04.114482 containerd[1609]: time="2026-01-28T01:48:04.114237317Z" level=info msg="CreateContainer within sandbox \"cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c\"" Jan 28 01:48:04.121216 containerd[1609]: time="2026-01-28T01:48:04.121048892Z" level=info msg="StartContainer for \"2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c\"" Jan 28 01:48:04.122984 containerd[1609]: time="2026-01-28T01:48:04.122804951Z" level=info msg="connecting to shim 2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c" address="unix:///run/containerd/s/004f2f3be3ea3e03f58e057bbaa5bebeda8a7e17fab2acdd2a25a710ace625a7" protocol=ttrpc version=3 Jan 28 01:48:04.135003 systemd[1]: Started cri-containerd-d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d.scope - libcontainer container d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d. Jan 28 01:48:04.188652 systemd[1]: Started cri-containerd-ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8.scope - libcontainer container ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8. 
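The long `proctitle=` values in the audit records above are the process command lines, hex-encoded with NUL bytes separating the argv elements (this is the standard kernel audit PROCTITLE encoding). A minimal sketch, not part of the log, that decodes one of them back into readable arguments:

```python
# Decode an audit PROCTITLE field: the kernel emits argv as a single
# hex string with NUL (0x00) bytes between arguments.
def decode_proctitle(hex_s: str) -> list[str]:
    raw = bytes.fromhex(hex_s)
    return raw.decode("utf-8", errors="replace").split("\x00")

# Leading portion of a proctitle value from the records above.
sample = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
    "72756E632F6B38732E696F002D2D6C6F67"
)
print(decode_proctitle(sample))
# → ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```

The trailing hex in each record continues with the log path under `/run/containerd/io.containerd.runtime.v2.task/k8s.io/` plus the container ID; `ausearch -i` performs the same interpretation if the audit userspace tools are installed.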
Jan 28 01:48:04.578409 systemd[1]: Started cri-containerd-2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c.scope - libcontainer container 2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c. Jan 28 01:48:04.671000 audit: BPF prog-id=98 op=LOAD Jan 28 01:48:04.684000 audit: BPF prog-id=99 op=LOAD Jan 28 01:48:04.684000 audit[2764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.684000 audit: BPF prog-id=99 op=UNLOAD Jan 28 01:48:04.684000 audit[2764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.684000 audit: BPF prog-id=100 op=LOAD Jan 28 01:48:04.684000 audit[2764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:48:04.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.684000 audit: BPF prog-id=101 op=LOAD Jan 28 01:48:04.684000 audit[2764]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.698000 audit: BPF prog-id=101 op=UNLOAD Jan 28 01:48:04.698000 audit[2764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.698000 audit: BPF prog-id=100 op=UNLOAD Jan 28 01:48:04.698000 audit[2764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.698000 audit: BPF prog-id=102 op=LOAD Jan 28 01:48:04.698000 audit[2764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2639 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431386265333233636537626466643766646139643861666462343932 Jan 28 01:48:04.718000 audit: BPF prog-id=103 op=LOAD Jan 28 01:48:04.763000 audit: BPF prog-id=104 op=LOAD Jan 28 01:48:04.763000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.763000 audit: BPF prog-id=104 op=UNLOAD Jan 28 01:48:04.763000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.763000 audit: BPF prog-id=105 op=LOAD Jan 28 01:48:04.763000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.809000 audit: BPF prog-id=106 op=LOAD Jan 28 01:48:04.809000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.809000 audit: BPF prog-id=106 op=UNLOAD Jan 28 01:48:04.809000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.812000 audit: BPF prog-id=105 op=UNLOAD Jan 28 01:48:04.812000 audit[2765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.812000 audit: BPF prog-id=107 op=LOAD Jan 28 01:48:04.812000 audit[2765]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2650 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336535323830653633653233656330353265666361323733313432 Jan 28 01:48:04.846000 audit: BPF prog-id=108 op=LOAD Jan 28 01:48:04.869000 audit: BPF prog-id=109 op=LOAD Jan 28 01:48:04.869000 
audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 Jan 28 01:48:04.869000 audit: BPF prog-id=109 op=UNLOAD Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 Jan 28 01:48:04.869000 audit: BPF prog-id=110 op=LOAD Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 Jan 28 01:48:04.869000 audit: BPF 
prog-id=111 op=LOAD Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 Jan 28 01:48:04.869000 audit: BPF prog-id=111 op=UNLOAD Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 Jan 28 01:48:04.869000 audit: BPF prog-id=110 op=UNLOAD Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564 
Jan 28 01:48:04.869000 audit: BPF prog-id=112 op=LOAD
Jan 28 01:48:04.869000 audit[2784]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2661 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:48:04.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266313064633039373562316364323161636165303066333731666564
Jan 28 01:48:05.439831 containerd[1609]: time="2026-01-28T01:48:05.427879704Z" level=info msg="StartContainer for \"ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8\" returns successfully"
Jan 28 01:48:05.760021 containerd[1609]: time="2026-01-28T01:48:05.754076647Z" level=info msg="StartContainer for \"d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d\" returns successfully"
Jan 28 01:48:06.083557 kubelet[2584]: E0128 01:48:06.033555 2584 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 28 01:48:06.105852 containerd[1609]: time="2026-01-28T01:48:06.105163634Z" level=info msg="StartContainer for \"2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c\" returns successfully"
Jan 28 01:48:06.198436 kubelet[2584]: E0128 01:48:06.198321 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:06.208304 kubelet[2584]: E0128 01:48:06.207956 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:06.271262 kubelet[2584]: E0128 01:48:06.271004 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:06.282393 kubelet[2584]: E0128 01:48:06.277470 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="6.4s"
Jan 28 01:48:06.293781 kubelet[2584]: E0128 01:48:06.290979 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:06.305968 kubelet[2584]: E0128 01:48:06.305929 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:06.306781 kubelet[2584]: E0128 01:48:06.306752 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:06.911944 kubelet[2584]: I0128 01:48:06.910816 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:48:07.371049 kubelet[2584]: E0128 01:48:07.368446 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:07.371049 kubelet[2584]: E0128 01:48:07.368949 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:07.458328 kubelet[2584]: E0128 01:48:07.453532 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:07.475231 kubelet[2584]: E0128 01:48:07.461219 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:07.488346 kubelet[2584]: E0128 01:48:07.486389 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:07.510236 kubelet[2584]: E0128 01:48:07.495916 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:08.378744 kubelet[2584]: E0128 01:48:08.378054 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:08.378744 kubelet[2584]: E0128 01:48:08.378311 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:08.382733 kubelet[2584]: E0128 01:48:08.379314 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:08.382733 kubelet[2584]: E0128 01:48:08.379617 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:08.705524 kubelet[2584]: E0128 01:48:08.704039 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:08.705524 kubelet[2584]: E0128 01:48:08.704331 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:09.995423 kubelet[2584]: E0128 01:48:09.995130 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:09.999875 kubelet[2584]: E0128 01:48:09.998760 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:10.454936 kubelet[2584]: E0128 01:48:10.441455 2584 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:48:10.832069 kubelet[2584]: E0128 01:48:10.831244 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:10.839753 kubelet[2584]: E0128 01:48:10.837631 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:16.863805 kubelet[2584]: E0128 01:48:16.862280 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 28 01:48:16.921247 kubelet[2584]: E0128 01:48:16.918607 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Jan 28 01:48:17.982481 kubelet[2584]: E0128 01:48:17.978742 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 28 01:48:18.288082 kubelet[2584]: E0128 01:48:18.193034 2584 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ec1e1f824e092 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:47:59.878152338 +0000 UTC m=+4.410266344,LastTimestamp:2026-01-28 01:47:59.878152338 +0000 UTC m=+4.410266344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 28 01:48:18.417335 kubelet[2584]: E0128 01:48:18.415401 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 28 01:48:18.751329 kubelet[2584]: E0128 01:48:18.751248 2584 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 28 01:48:18.803533 kubelet[2584]: E0128 01:48:18.802530 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:18.803533 kubelet[2584]: E0128 01:48:18.803465 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:20.447785 kubelet[2584]: E0128 01:48:20.442529 2584 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:48:20.989550 kubelet[2584]: E0128 01:48:20.989472 2584 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 28 01:48:21.193392 kubelet[2584]: E0128 01:48:21.192784 2584 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:48:21.193392 kubelet[2584]: E0128 01:48:21.192932 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:21.573134 kubelet[2584]: E0128 01:48:21.571428 2584 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 28 01:48:22.083963 kubelet[2584]: E0128 01:48:22.083649 2584 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 28 01:48:22.759476 kubelet[2584]: E0128 01:48:22.758825 2584 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 28 01:48:23.339383 kubelet[2584]: I0128 01:48:23.339254 2584 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:48:23.415082 kubelet[2584]: I0128 01:48:23.414789 2584 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 28 01:48:23.415913 kubelet[2584]: E0128 01:48:23.415404 2584 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 28 01:48:23.510992 kubelet[2584]: E0128 01:48:23.510863 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:23.614448 kubelet[2584]: E0128 01:48:23.612971 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:23.714929 kubelet[2584]: E0128 01:48:23.714624 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:23.827923 kubelet[2584]: E0128 01:48:23.825873 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:23.932513 kubelet[2584]: E0128 01:48:23.930023 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.033410 kubelet[2584]: E0128 01:48:24.030647 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.139648 kubelet[2584]: E0128 01:48:24.132244 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.237292 kubelet[2584]: E0128 01:48:24.237243 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.342437 kubelet[2584]: E0128 01:48:24.342370 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.445909 kubelet[2584]: E0128 01:48:24.445842 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.547111 kubelet[2584]: E0128 01:48:24.546897 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.653883 kubelet[2584]: E0128 01:48:24.653817 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.767816 kubelet[2584]: E0128 01:48:24.767751 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.868495 kubelet[2584]: E0128 01:48:24.867949 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:24.969332 kubelet[2584]: E0128 01:48:24.969036 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.072716 kubelet[2584]: E0128 01:48:25.072553 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.174501 kubelet[2584]: E0128 01:48:25.173300 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.274914 kubelet[2584]: E0128 01:48:25.274609 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.380660 kubelet[2584]: E0128 01:48:25.380386 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.480979 kubelet[2584]: E0128 01:48:25.480506 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.581213 kubelet[2584]: E0128 01:48:25.580770 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.682797 kubelet[2584]: E0128 01:48:25.682077 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.785819 kubelet[2584]: E0128 01:48:25.782455 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.886968 kubelet[2584]: E0128 01:48:25.885522 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:48:25.991242 kubelet[2584]: I0128 01:48:25.990453 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:48:26.198884 kubelet[2584]: I0128 01:48:26.198030 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 28 01:48:26.564992 kubelet[2584]: I0128 01:48:26.564144 2584 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 28 01:48:26.941894 kubelet[2584]: I0128 01:48:26.941357 2584 apiserver.go:52] "Watching apiserver"
Jan 28 01:48:26.976762 kubelet[2584]: E0128 01:48:26.976591 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:26.977804 kubelet[2584]: E0128 01:48:26.977776 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:26.993293 kubelet[2584]: E0128 01:48:26.978106 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:26.993293 kubelet[2584]: I0128 01:48:26.988969 2584 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 28 01:48:30.148022 systemd[1]: Reload requested from client PID 2876 ('systemctl') (unit session-8.scope)...
Jan 28 01:48:30.149112 systemd[1]: Reloading...
Jan 28 01:48:30.807052 zram_generator::config[2922]: No configuration found.
Jan 28 01:48:30.840561 kubelet[2584]: E0128 01:48:30.837223 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:30.842139 kubelet[2584]: I0128 01:48:30.839534 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.839516745 podStartE2EDuration="4.839516745s" podCreationTimestamp="2026-01-28 01:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:48:30.560043604 +0000 UTC m=+35.092157610" watchObservedRunningTime="2026-01-28 01:48:30.839516745 +0000 UTC m=+35.371630772"
Jan 28 01:48:30.842139 kubelet[2584]: I0128 01:48:30.841540 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.841523434 podStartE2EDuration="4.841523434s" podCreationTimestamp="2026-01-28 01:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:48:30.829022576 +0000 UTC m=+35.361136583" watchObservedRunningTime="2026-01-28 01:48:30.841523434 +0000 UTC m=+35.373637450"
Jan 28 01:48:30.942774 kubelet[2584]: I0128 01:48:30.942583 2584 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.9425645419999995 podStartE2EDuration="4.942564542s" podCreationTimestamp="2026-01-28 01:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:48:30.942136765 +0000 UTC m=+35.474250771" watchObservedRunningTime="2026-01-28 01:48:30.942564542 +0000 UTC m=+35.474678568"
Jan 28 01:48:30.986484 kubelet[2584]: E0128 01:48:30.984639 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:32.031377 kubelet[2584]: E0128 01:48:32.028403 2584 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:48:32.171260 systemd[1]: Reloading finished in 2019 ms.
Jan 28 01:48:32.307096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:48:32.355061 systemd[1]: kubelet.service: Deactivated successfully.
Jan 28 01:48:32.355572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:48:32.355653 systemd[1]: kubelet.service: Consumed 5.772s CPU time, 133.9M memory peak.
Jan 28 01:48:32.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:48:32.364486 kernel: kauditd_printk_skb: 122 callbacks suppressed
Jan 28 01:48:32.364630 kernel: audit: type=1131 audit(1769564912.353:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:48:32.374033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:48:32.373000 audit: BPF prog-id=113 op=LOAD
Jan 28 01:48:32.396578 kernel: audit: type=1334 audit(1769564912.373:413): prog-id=113 op=LOAD
Jan 28 01:48:32.396754 kernel: audit: type=1334 audit(1769564912.373:414): prog-id=65 op=UNLOAD
Jan 28 01:48:32.373000 audit: BPF prog-id=65 op=UNLOAD
Jan 28 01:48:32.415560 kernel: audit: type=1334 audit(1769564912.381:415): prog-id=114 op=LOAD
Jan 28 01:48:32.415927 kernel: audit: type=1334 audit(1769564912.381:416): prog-id=75 op=UNLOAD
Jan 28 01:48:32.415992 kernel: audit: type=1334 audit(1769564912.381:417): prog-id=115 op=LOAD
Jan 28 01:48:32.416017 kernel: audit: type=1334 audit(1769564912.381:418): prog-id=116 op=LOAD
Jan 28 01:48:32.416051 kernel: audit: type=1334 audit(1769564912.381:419): prog-id=76 op=UNLOAD
Jan 28 01:48:32.416084 kernel: audit: type=1334 audit(1769564912.381:420): prog-id=77 op=UNLOAD
Jan 28 01:48:32.416124 kernel: audit: type=1334 audit(1769564912.384:421): prog-id=117 op=LOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=114 op=LOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=75 op=UNLOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=115 op=LOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=116 op=LOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=76 op=UNLOAD
Jan 28 01:48:32.381000 audit: BPF prog-id=77 op=UNLOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=117 op=LOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=69 op=UNLOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=118 op=LOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=119 op=LOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=70 op=UNLOAD
Jan 28 01:48:32.384000 audit: BPF prog-id=71 op=UNLOAD
Jan 28 01:48:32.392000 audit: BPF prog-id=120 op=LOAD
Jan 28 01:48:32.392000 audit: BPF prog-id=78 op=UNLOAD
Jan 28 01:48:32.403000 audit: BPF prog-id=121 op=LOAD
Jan 28 01:48:32.403000 audit: BPF prog-id=80 op=UNLOAD
Jan 28 01:48:32.403000 audit: BPF prog-id=122 op=LOAD
Jan 28 01:48:32.403000 audit: BPF prog-id=123 op=LOAD
Jan 28 01:48:32.403000 audit: BPF prog-id=81 op=UNLOAD
Jan 28 01:48:32.404000 audit: BPF prog-id=82 op=UNLOAD
Jan 28 01:48:32.408000 audit: BPF prog-id=124 op=LOAD
Jan 28 01:48:32.408000 audit: BPF prog-id=66 op=UNLOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=125 op=LOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=126 op=LOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=67 op=UNLOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=68 op=UNLOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=127 op=LOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=128 op=LOAD
Jan 28 01:48:32.409000 audit: BPF prog-id=63 op=UNLOAD
Jan 28 01:48:32.410000 audit: BPF prog-id=64 op=UNLOAD
Jan 28 01:48:32.414000 audit: BPF prog-id=129 op=LOAD
Jan 28 01:48:32.415000 audit: BPF prog-id=72 op=UNLOAD
Jan 28 01:48:32.415000 audit: BPF prog-id=130 op=LOAD
Jan 28 01:48:32.415000 audit: BPF prog-id=131 op=LOAD
Jan 28 01:48:32.415000 audit: BPF prog-id=73 op=UNLOAD
Jan 28 01:48:32.415000 audit: BPF prog-id=74 op=UNLOAD
Jan 28 01:48:32.416000 audit: BPF prog-id=132 op=LOAD
Jan 28 01:48:32.416000 audit: BPF prog-id=79 op=UNLOAD
Jan 28 01:48:33.133985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:48:33.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:48:33.163527 (kubelet)[2967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 28 01:48:33.540200 kubelet[2967]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:48:33.540200 kubelet[2967]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 28 01:48:33.540200 kubelet[2967]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:48:33.540200 kubelet[2967]: I0128 01:48:33.536001 2967 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 01:48:33.593831 kubelet[2967]: I0128 01:48:33.593658 2967 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 28 01:48:33.596763 kubelet[2967]: I0128 01:48:33.594035 2967 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 01:48:33.596763 kubelet[2967]: I0128 01:48:33.594440 2967 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 28 01:48:33.596763 kubelet[2967]: I0128 01:48:33.596282 2967 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 28 01:48:33.600073 kubelet[2967]: I0128 01:48:33.600044 2967 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 28 01:48:33.713283 kubelet[2967]: I0128 01:48:33.709771 2967 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 01:48:33.753895 kubelet[2967]: I0128 01:48:33.752000 2967 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 28 01:48:33.757490 kubelet[2967]: I0128 01:48:33.755583 2967 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 01:48:33.757490 kubelet[2967]: I0128 01:48:33.755631 2967 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 01:48:33.757490 kubelet[2967]: I0128 01:48:33.756007 2967 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 01:48:33.757490 kubelet[2967]: I0128 01:48:33.756024 2967 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 01:48:33.757490 kubelet[2967]: I0128 01:48:33.756104 2967 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:48:33.761081 kubelet[2967]: I0128 01:48:33.760537 2967 kubelet.go:480] "Attempting to sync node with API server"
Jan 28 01:48:33.761081 kubelet[2967]: I0128 01:48:33.760583 2967 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 01:48:33.761081 kubelet[2967]: I0128 01:48:33.760608 2967 kubelet.go:386] "Adding apiserver pod source"
Jan 28 01:48:33.761081 kubelet[2967]: I0128 01:48:33.760625 2967 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 01:48:33.766448 kubelet[2967]: I0128 01:48:33.766398 2967 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 28 01:48:33.773556 kubelet[2967]: I0128 01:48:33.772906 2967 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 28 01:48:33.849471 kubelet[2967]: I0128 01:48:33.835072 2967 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 28 01:48:33.861125 kubelet[2967]: I0128 01:48:33.860973 2967 server.go:1289] "Started kubelet"
Jan 28 01:48:33.863357 kubelet[2967]: I0128 01:48:33.863006 2967 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 01:48:33.863473 kubelet[2967]: E0128 01:48:33.863415 2967 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 28 01:48:33.863811 kubelet[2967]: I0128 01:48:33.863543 2967 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 01:48:33.863811 kubelet[2967]: I0128 01:48:33.863753 2967 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 01:48:33.865311 kubelet[2967]: I0128 01:48:33.864961 2967 server.go:317] "Adding debug handlers to kubelet server"
Jan 28 01:48:33.920610 kubelet[2967]: I0128 01:48:33.914614 2967 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 01:48:33.920610 kubelet[2967]: I0128 01:48:33.916583 2967 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 28 01:48:33.931251 kubelet[2967]: I0128 01:48:33.931014 2967 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 28 01:48:33.935126 kubelet[2967]: I0128 01:48:33.934852 2967 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 28 01:48:33.935126 kubelet[2967]: I0128 01:48:33.935115 2967 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 01:48:33.936283 kubelet[2967]: I0128 01:48:33.936247 2967 factory.go:223] Registration of the systemd container factory successfully
Jan 28 01:48:33.936741 kubelet[2967]: I0128 01:48:33.936549 2967 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 28 01:48:33.945577 kubelet[2967]: I0128 01:48:33.945323 2967 factory.go:223] Registration of the containerd container factory successfully
Jan 28 01:48:34.130572 kubelet[2967]: I0128 01:48:34.130082 2967 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 28 01:48:34.153051 kubelet[2967]: I0128 01:48:34.153004 2967 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 28 01:48:34.180865 kubelet[2967]: I0128 01:48:34.180054 2967 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 28 01:48:34.188648 kubelet[2967]: I0128 01:48:34.184844 2967 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 28 01:48:34.188648 kubelet[2967]: I0128 01:48:34.184870 2967 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 28 01:48:34.188648 kubelet[2967]: E0128 01:48:34.184940 2967 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 01:48:34.289576 kubelet[2967]: E0128 01:48:34.288124 2967 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 01:48:34.494462 kubelet[2967]: E0128 01:48:34.491802 2967 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 01:48:34.840801 kubelet[2967]: I0128 01:48:34.829600 2967 apiserver.go:52] "Watching apiserver"
Jan 28 01:48:34.891940 kubelet[2967]: E0128 01:48:34.891902 2967 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 01:48:35.057353 kubelet[2967]: I0128 01:48:35.056434 2967 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 28 01:48:35.066972 kubelet[2967]: I0128 01:48:35.056910 2967 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 28 01:48:35.066972 kubelet[2967]: I0128 01:48:35.066785 2967 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:48:35.071875 kubelet[2967]: I0128 01:48:35.067511 2967 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 28 01:48:35.071875 kubelet[2967]: I0128 01:48:35.067527 2967 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 28 01:48:35.071875 kubelet[2967]: I0128 01:48:35.067550 2967 policy_none.go:49] "None policy: Start"
Jan 28 01:48:35.071875 kubelet[2967]: I0128 01:48:35.067564 2967 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 28 01:48:35.071875 kubelet[2967]: I0128 01:48:35.067577 2967 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 01:48:35.120497 kubelet[2967]: I0128 01:48:35.103865 2967 state_mem.go:75] "Updated machine memory state"
Jan 28 01:48:35.194122 kubelet[2967]: E0128 01:48:35.193935 2967 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 28 01:48:35.194346 kubelet[2967]: I0128 01:48:35.194335 2967 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 01:48:35.194401 kubelet[2967]: I0128 01:48:35.194359 2967 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 01:48:35.199415 kubelet[2967]: I0128 01:48:35.196147 2967 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 01:48:35.227546 kubelet[2967]: E0128 01:48:35.224877 2967 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 28 01:48:35.474061 kubelet[2967]: I0128 01:48:35.460993 2967 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:48:35.612988 kubelet[2967]: I0128 01:48:35.611849 2967 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 28 01:48:35.612988 kubelet[2967]: I0128 01:48:35.611959 2967 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 28 01:48:35.612988 kubelet[2967]: I0128 01:48:35.611986 2967 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 28 01:48:35.639131 containerd[1609]: time="2026-01-28T01:48:35.620875131Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 28 01:48:35.639851 kubelet[2967]: I0128 01:48:35.621939 2967 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 28 01:48:35.707035 kubelet[2967]: I0128 01:48:35.705333 2967 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 28 01:48:35.719974 kubelet[2967]: I0128 01:48:35.719863 2967 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:48:35.848434 kubelet[2967]: I0128 01:48:35.826234 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:48:35.848434 kubelet[2967]: I0128 01:48:35.837585 2967 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 28 01:48:35.865121 kubelet[2967]: I0128 01:48:35.835582 2967 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:35.865121 kubelet[2967]: I0128 01:48:35.856138 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:35.865121 kubelet[2967]: I0128 01:48:35.857565 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:35.865121 kubelet[2967]: I0128 01:48:35.857623 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:35.865121 kubelet[2967]: I0128 01:48:35.857767 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:35.875034 kubelet[2967]: I0128 01:48:35.858016 2967 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:48:35.875034 kubelet[2967]: I0128 01:48:35.858044 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8d6d985bfe094caafe61d064e436e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b8d6d985bfe094caafe61d064e436e9\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:35.875034 kubelet[2967]: I0128 01:48:35.858070 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:35.936542 kubelet[2967]: E0128 01:48:35.930479 2967 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:48:35.936542 kubelet[2967]: E0128 01:48:35.930476 2967 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 01:48:36.021790 kubelet[2967]: E0128 01:48:36.019548 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.061948 kubelet[2967]: I0128 01:48:36.060295 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9-kube-proxy\") pod \"kube-proxy-hd79n\" (UID: \"d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9\") " pod="kube-system/kube-proxy-hd79n" Jan 28 01:48:36.061948 kubelet[2967]: I0128 01:48:36.060353 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9-xtables-lock\") pod \"kube-proxy-hd79n\" (UID: \"d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9\") " pod="kube-system/kube-proxy-hd79n" Jan 28 01:48:36.061948 kubelet[2967]: I0128 01:48:36.060384 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9-kube-api-access-zthkc\") pod \"kube-proxy-hd79n\" (UID: \"d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9\") " pod="kube-system/kube-proxy-hd79n" Jan 28 01:48:36.061948 kubelet[2967]: I0128 01:48:36.060426 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9-lib-modules\") pod \"kube-proxy-hd79n\" (UID: \"d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9\") " pod="kube-system/kube-proxy-hd79n" Jan 28 01:48:36.219924 systemd[1]: Created slice kubepods-besteffort-podd260f6b1_95f6_48e4_b6ea_e35bc12ff3b9.slice - libcontainer container kubepods-besteffort-podd260f6b1_95f6_48e4_b6ea_e35bc12ff3b9.slice. 
Jan 28 01:48:36.244409 kubelet[2967]: E0128 01:48:36.234575 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.244958 kubelet[2967]: E0128 01:48:36.244929 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.395061 kubelet[2967]: E0128 01:48:36.395021 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.395971 kubelet[2967]: E0128 01:48:36.395943 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.410594 kubelet[2967]: E0128 01:48:36.408236 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.624580 kubelet[2967]: E0128 01:48:36.618327 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:36.627279 containerd[1609]: time="2026-01-28T01:48:36.627064916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hd79n,Uid:d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9,Namespace:kube-system,Attempt:0,}" Jan 28 01:48:37.026007 containerd[1609]: time="2026-01-28T01:48:37.025936053Z" level=info msg="connecting to shim 093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee" address="unix:///run/containerd/s/840a56268daa7c6ed781a4c6f1a4d309a085a109d3ac902ab254c59a74324a49" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:48:37.334841 
kubelet[2967]: E0128 01:48:37.332019 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:37.442269 systemd[1]: Started cri-containerd-093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee.scope - libcontainer container 093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee. Jan 28 01:48:37.559083 kubelet[2967]: E0128 01:48:37.522308 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:37.649000 audit: BPF prog-id=133 op=LOAD Jan 28 01:48:37.661344 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:48:37.661477 kernel: audit: type=1334 audit(1769564917.649:454): prog-id=133 op=LOAD Jan 28 01:48:37.673550 kernel: audit: type=1334 audit(1769564917.657:455): prog-id=134 op=LOAD Jan 28 01:48:37.657000 audit: BPF prog-id=134 op=LOAD Jan 28 01:48:37.657000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.721010 kernel: audit: type=1300 audit(1769564917.657:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.721145 kernel: audit: type=1327 audit(1769564917.657:455): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.735888 kernel: audit: type=1334 audit(1769564917.657:456): prog-id=134 op=UNLOAD Jan 28 01:48:37.657000 audit: BPF prog-id=134 op=UNLOAD Jan 28 01:48:37.756007 kernel: audit: type=1300 audit(1769564917.657:456): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.657000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.799607 kernel: audit: type=1327 audit(1769564917.657:456): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: BPF prog-id=135 op=LOAD Jan 28 01:48:37.666000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.831658 kernel: audit: type=1334 audit(1769564917.666:457): prog-id=135 op=LOAD Jan 28 01:48:37.831888 kernel: audit: type=1300 audit(1769564917.666:457): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.831936 kernel: audit: type=1327 audit(1769564917.666:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: BPF prog-id=136 op=LOAD Jan 28 01:48:37.666000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: BPF prog-id=136 op=UNLOAD Jan 28 01:48:37.666000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: BPF prog-id=135 op=UNLOAD Jan 28 01:48:37.666000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:37.666000 audit: BPF prog-id=137 op=LOAD Jan 28 01:48:37.666000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3024 pid=3037 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:37.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039336538396332383637383562633366393466363061376665313235 Jan 28 01:48:38.102469 containerd[1609]: time="2026-01-28T01:48:38.102040916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hd79n,Uid:d260f6b1-95f6-48e4-b6ea-e35bc12ff3b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee\"" Jan 28 01:48:38.137791 kubelet[2967]: E0128 01:48:38.113414 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:38.166823 containerd[1609]: time="2026-01-28T01:48:38.166639908Z" level=info msg="CreateContainer within sandbox \"093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:48:38.343737 containerd[1609]: time="2026-01-28T01:48:38.341876110Z" level=info msg="Container b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:48:38.354874 kubelet[2967]: E0128 01:48:38.354017 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:38.375433 kubelet[2967]: E0128 01:48:38.369649 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:38.504044 
containerd[1609]: time="2026-01-28T01:48:38.503897199Z" level=info msg="CreateContainer within sandbox \"093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c\"" Jan 28 01:48:38.542802 containerd[1609]: time="2026-01-28T01:48:38.520266078Z" level=info msg="StartContainer for \"b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c\"" Jan 28 01:48:38.544274 containerd[1609]: time="2026-01-28T01:48:38.544110846Z" level=info msg="connecting to shim b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c" address="unix:///run/containerd/s/840a56268daa7c6ed781a4c6f1a4d309a085a109d3ac902ab254c59a74324a49" protocol=ttrpc version=3 Jan 28 01:48:38.950240 systemd[1]: Started cri-containerd-b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c.scope - libcontainer container b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c. 
Jan 28 01:48:39.483000 audit: BPF prog-id=138 op=LOAD Jan 28 01:48:39.483000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3024 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:39.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643334633966366434633231636462396364356335633064383738 Jan 28 01:48:39.492000 audit: BPF prog-id=139 op=LOAD Jan 28 01:48:39.492000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3024 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:39.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643334633966366434633231636462396364356335633064383738 Jan 28 01:48:39.492000 audit: BPF prog-id=139 op=UNLOAD Jan 28 01:48:39.492000 audit[3065]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:39.492000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643334633966366434633231636462396364356335633064383738 Jan 28 01:48:39.493000 audit: BPF prog-id=138 op=UNLOAD Jan 28 01:48:39.493000 audit[3065]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3024 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:39.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643334633966366434633231636462396364356335633064383738 Jan 28 01:48:39.493000 audit: BPF prog-id=140 op=LOAD Jan 28 01:48:39.493000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3024 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:39.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643334633966366434633231636462396364356335633064383738 Jan 28 01:48:39.722458 containerd[1609]: time="2026-01-28T01:48:39.722358482Z" level=info msg="StartContainer for \"b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c\" returns successfully" Jan 28 01:48:40.503147 kubelet[2967]: E0128 01:48:40.502934 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:41.072762 kubelet[2967]: I0128 01:48:41.072354 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hd79n" podStartSLOduration=6.072323711 podStartE2EDuration="6.072323711s" podCreationTimestamp="2026-01-28 01:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:48:40.804528378 +0000 UTC m=+7.409440067" watchObservedRunningTime="2026-01-28 01:48:41.072323711 +0000 UTC m=+7.677235380" Jan 28 01:48:41.114134 systemd[1]: Created slice kubepods-besteffort-podb931dda2_4f4f_40f1_a4a9_4f772efe9eb9.slice - libcontainer container kubepods-besteffort-podb931dda2_4f4f_40f1_a4a9_4f772efe9eb9.slice. Jan 28 01:48:41.185433 kubelet[2967]: I0128 01:48:41.185286 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b931dda2-4f4f-40f1-a4a9-4f772efe9eb9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-25ww8\" (UID: \"b931dda2-4f4f-40f1-a4a9-4f772efe9eb9\") " pod="tigera-operator/tigera-operator-7dcd859c48-25ww8" Jan 28 01:48:41.187016 kubelet[2967]: I0128 01:48:41.186960 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwv5\" (UniqueName: \"kubernetes.io/projected/b931dda2-4f4f-40f1-a4a9-4f772efe9eb9-kube-api-access-jbwv5\") pod \"tigera-operator-7dcd859c48-25ww8\" (UID: \"b931dda2-4f4f-40f1-a4a9-4f772efe9eb9\") " pod="tigera-operator/tigera-operator-7dcd859c48-25ww8" Jan 28 01:48:41.546352 kubelet[2967]: E0128 01:48:41.531627 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:41.841353 containerd[1609]: time="2026-01-28T01:48:41.834330604Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-25ww8,Uid:b931dda2-4f4f-40f1-a4a9-4f772efe9eb9,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:48:41.905000 audit[3145]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:41.905000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9b8b2ee0 a2=0 a3=7ffc9b8b2ecc items=0 ppid=3078 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:48:41.908000 audit[3144]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:41.908000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda8c9d0b0 a2=0 a3=7ffda8c9d09c items=0 ppid=3078 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:48:41.913000 audit[3150]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:41.913000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcecc8c620 a2=0 a3=7ffcecc8c60c items=0 ppid=3078 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:48:41.921000 audit[3151]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:41.921000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf8be8c10 a2=0 a3=7ffcf8be8bfc items=0 ppid=3078 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:48:41.929000 audit[3153]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:41.929000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8dfff630 a2=0 a3=7ffd8dfff61c items=0 ppid=3078 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:48:41.931000 audit[3154]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:41.931000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaa367040 a2=0 a3=7ffdaa36702c items=0 ppid=3078 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:41.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:48:41.990016 containerd[1609]: time="2026-01-28T01:48:41.989385273Z" level=info msg="connecting to shim b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2" address="unix:///run/containerd/s/d09acdc242401520f6653fb2b4f019199fc6c2a2fe093660366a037d4b219284" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:48:42.100000 audit[3168]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.100000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe17dd45c0 a2=0 a3=7ffe17dd45ac items=0 ppid=3078 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:48:42.129000 audit[3174]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.129000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe18bdeee0 a2=0 a3=7ffe18bdeecc items=0 ppid=3078 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.129000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 01:48:42.218000 audit[3187]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.218000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb2970c40 a2=0 a3=7ffcb2970c2c items=0 ppid=3078 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 01:48:42.223000 audit[3195]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.223000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc539abd0 a2=0 a3=7ffdc539abbc items=0 ppid=3078 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:48:42.239000 audit[3197]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.239000 audit[3197]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe24ec3d70 a2=0 a3=7ffe24ec3d5c items=0 ppid=3078 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:48:42.257000 audit[3198]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.257000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb7c4a990 a2=0 a3=7ffcb7c4a97c items=0 ppid=3078 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:48:42.268228 systemd[1]: Started cri-containerd-b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2.scope - libcontainer container b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2. 
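The audit PROCTITLE fields in the records above are the full command line of the audited process, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch (the helper name is illustrative, not part of any tool shown in this log), applied to the first KUBE-PROXY-CANARY record:

```python
def decode_proctitle(hex_str: str) -> str:
    # Audit PROCTITLE payloads are hex-encoded argv vectors;
    # arguments are separated by NUL bytes, so replace those
    # with spaces to recover the readable command line.
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# Payload taken verbatim from the first audit: PROCTITLE record above.
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006E6174"
))
# → iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t nat
```

Decoded this way, the NETFILTER_CFG records trace kube-proxy creating its canary and service chains (`-N KUBE-PROXY-CANARY`, `-N KUBE-SERVICES`, …) in the nat and filter tables via `xtables-nft-multi`.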
Jan 28 01:48:42.277260 kubelet[2967]: E0128 01:48:42.277224 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:42.298000 audit[3202]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.298000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf6241d20 a2=0 a3=7ffdf6241d0c items=0 ppid=3078 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:48:42.331000 audit[3210]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.331000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0208acb0 a2=0 a3=7ffd0208ac9c items=0 ppid=3078 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 01:48:42.338000 audit[3211]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain 
pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.338000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb5da42a0 a2=0 a3=7ffcb5da428c items=0 ppid=3078 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:48:42.381000 audit: BPF prog-id=141 op=LOAD Jan 28 01:48:42.414000 audit: BPF prog-id=142 op=LOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 audit: BPF prog-id=142 op=UNLOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 
audit: BPF prog-id=143 op=LOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 audit: BPF prog-id=144 op=LOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 audit: BPF prog-id=144 op=UNLOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 audit: BPF prog-id=143 op=UNLOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.414000 audit: BPF prog-id=145 op=LOAD Jan 28 01:48:42.414000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=3164 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643164333438636630373935656132343837313163376566323834 Jan 28 01:48:42.450000 audit[3213]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.450000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff8d87320 a2=0 a3=7ffff8d8730c items=0 ppid=3078 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:48:42.470000 audit[3214]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.470000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc494df20 a2=0 a3=7ffcc494df0c items=0 ppid=3078 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:48:42.509000 audit[3216]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.509000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd759f2ee0 a2=0 a3=7ffd759f2ecc items=0 ppid=3078 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:48:42.543000 audit[3219]: NETFILTER_CFG table=filter:72 family=2 
entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.543000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdca902b50 a2=0 a3=7ffdca902b3c items=0 ppid=3078 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:48:42.567847 kubelet[2967]: E0128 01:48:42.563011 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:42.735000 audit[3223]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.761785 kernel: kauditd_printk_skb: 106 callbacks suppressed Jan 28 01:48:42.762395 kernel: audit: type=1325 audit(1769564922.735:494): table=filter:73 family=2 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.735000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff06b31740 a2=0 a3=7fff06b3172c items=0 ppid=3078 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.823643 kernel: audit: type=1300 audit(1769564922.735:494): arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff06b31740 a2=0 a3=7fff06b3172c items=0 ppid=3078 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:48:42.840326 kernel: audit: type=1327 audit(1769564922.735:494): proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:48:42.844784 kernel: audit: type=1325 audit(1769564922.825:495): table=nat:74 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.825000 audit[3229]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.882550 kernel: audit: type=1300 audit(1769564922.825:495): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeffe22f80 a2=0 a3=7ffeffe22f6c items=0 ppid=3078 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.825000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeffe22f80 a2=0 a3=7ffeffe22f6c items=0 ppid=3078 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.825000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:48:42.852000 audit[3231]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.936433 kernel: audit: type=1327 audit(1769564922.825:495): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:48:42.959834 kernel: audit: type=1325 audit(1769564922.852:496): table=nat:75 family=2 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:42.959879 kernel: audit: type=1300 audit(1769564922.852:496): arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc163cb8d0 a2=0 a3=7ffc163cb8bc items=0 ppid=3078 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.852000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc163cb8d0 a2=0 a3=7ffc163cb8bc items=0 ppid=3078 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:42.960031 containerd[1609]: time="2026-01-28T01:48:42.941976966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-25ww8,Uid:b931dda2-4f4f-40f1-a4a9-4f772efe9eb9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\"" Jan 28 01:48:42.985414 kernel: audit: type=1327 audit(1769564922.852:496): 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:48:42.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:48:43.001248 containerd[1609]: time="2026-01-28T01:48:43.000864695Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:48:43.176000 audit[3234]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:43.191571 kernel: audit: type=1325 audit(1769564923.176:497): table=nat:76 family=2 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:43.176000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda4fb4f80 a2=0 a3=7ffda4fb4f6c items=0 ppid=3078 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:48:43.198000 audit[3235]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:43.198000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7b8ed590 a2=0 a3=7ffd7b8ed57c items=0 ppid=3078 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:48:43.229000 audit[3237]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:48:43.229000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe69ada290 a2=0 a3=7ffe69ada27c items=0 ppid=3078 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:48:43.370911 kubelet[2967]: E0128 01:48:43.370806 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:43.460000 audit[3243]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:48:43.460000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeca48fff0 a2=0 a3=7ffeca48ffdc items=0 ppid=3078 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
28 01:48:43.575000 audit[3243]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:48:43.575000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeca48fff0 a2=0 a3=7ffeca48ffdc items=0 ppid=3078 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:48:43.604000 audit[3248]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.604000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc88f3d790 a2=0 a3=7ffc88f3d77c items=0 ppid=3078 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.604000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:48:43.623283 kubelet[2967]: E0128 01:48:43.621799 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:43.669000 audit[3250]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.669000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd071e3c70 a2=0 a3=7ffd071e3c5c items=0 ppid=3078 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 01:48:43.778000 audit[3253]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.778000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe991c31b0 a2=0 a3=7ffe991c319c items=0 ppid=3078 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 01:48:43.792000 audit[3254]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.792000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3196c100 a2=0 a3=7fff3196c0ec items=0 ppid=3078 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:48:43.914000 audit[3256]: 
NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.914000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce6980480 a2=0 a3=7ffce698046c items=0 ppid=3078 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:48:43.925000 audit[3257]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.925000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedbd42940 a2=0 a3=7ffedbd4292c items=0 ppid=3078 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:48:43.985000 audit[3259]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:43.985000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca91be780 a2=0 a3=7ffca91be76c items=0 ppid=3078 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:43.985000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 01:48:44.037000 audit[3262]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.037000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdce3d4450 a2=0 a3=7ffdce3d443c items=0 ppid=3078 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 01:48:44.047000 audit[3263]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.047000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2ef3bad0 a2=0 a3=7ffe2ef3babc items=0 ppid=3078 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:48:44.073000 audit[3265]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.073000 audit[3265]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4fd306a0 a2=0 a3=7ffc4fd3068c items=0 ppid=3078 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:48:44.104000 audit[3266]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.104000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc691b46f0 a2=0 a3=7ffc691b46dc items=0 ppid=3078 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:48:44.146000 audit[3268]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.146000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff42005660 a2=0 a3=7fff4200564c items=0 ppid=3078 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.146000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 01:48:44.221000 audit[3271]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.221000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe84a4a450 a2=0 a3=7ffe84a4a43c items=0 ppid=3078 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 01:48:44.298000 audit[3274]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.298000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd90816f0 a2=0 a3=7ffcd90816dc items=0 ppid=3078 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 01:48:44.303000 audit[3275]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.303000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff99cce860 a2=0 a3=7fff99cce84c items=0 ppid=3078 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:48:44.323000 audit[3277]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.323000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff24633a90 a2=0 a3=7fff24633a7c items=0 ppid=3078 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:48:44.386000 audit[3280]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.386000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccddc8990 a2=0 a3=7ffccddc897c items=0 ppid=3078 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.386000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:48:44.398000 audit[3281]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.398000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba195580 a2=0 a3=7ffcba19556c items=0 ppid=3078 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:48:44.422000 audit[3283]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.422000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc16f413d0 a2=0 a3=7ffc16f413bc items=0 ppid=3078 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:48:44.676000 audit[3284]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.676000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2dfeec70 a2=0 
a3=7ffe2dfeec5c items=0 ppid=3078 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:48:44.817000 audit[3286]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.817000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3d1ff540 a2=0 a3=7fff3d1ff52c items=0 ppid=3078 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:48:44.943000 audit[3289]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:48:44.943000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8084e930 a2=0 a3=7fff8084e91c items=0 ppid=3078 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:44.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:48:45.068000 audit[3295]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:48:45.068000 audit[3295]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd040be0c0 a2=0 a3=7ffd040be0ac items=0 ppid=3078 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:45.068000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:48:45.077000 audit[3295]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:48:45.077000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd040be0c0 a2=0 a3=7ffd040be0ac items=0 ppid=3078 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:48:45.077000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:48:45.650064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408108219.mount: Deactivated successfully. 
Jan 28 01:49:01.388007 containerd[1609]: time="2026-01-28T01:49:01.387894084Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:49:01.400392 containerd[1609]: time="2026-01-28T01:49:01.400061413Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 28 01:49:01.404925 containerd[1609]: time="2026-01-28T01:49:01.403568349Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:49:01.409750 containerd[1609]: time="2026-01-28T01:49:01.409522124Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:49:01.412825 containerd[1609]: time="2026-01-28T01:49:01.412128092Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 18.411160796s" Jan 28 01:49:01.412825 containerd[1609]: time="2026-01-28T01:49:01.412245189Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:49:01.424422 containerd[1609]: time="2026-01-28T01:49:01.424074184Z" level=info msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:49:01.485310 containerd[1609]: time="2026-01-28T01:49:01.485047364Z" level=info msg="Container 
0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:49:01.544900 containerd[1609]: time="2026-01-28T01:49:01.542301852Z" level=info msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\"" Jan 28 01:49:01.556385 containerd[1609]: time="2026-01-28T01:49:01.555579511Z" level=info msg="StartContainer for \"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\"" Jan 28 01:49:01.558746 containerd[1609]: time="2026-01-28T01:49:01.558035039Z" level=info msg="connecting to shim 0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9" address="unix:///run/containerd/s/d09acdc242401520f6653fb2b4f019199fc6c2a2fe093660366a037d4b219284" protocol=ttrpc version=3 Jan 28 01:49:01.747301 systemd[1]: Started cri-containerd-0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9.scope - libcontainer container 0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9. 
Jan 28 01:49:02.036871 kernel: kauditd_printk_skb: 86 callbacks suppressed Jan 28 01:49:02.037145 kernel: audit: type=1334 audit(1769564942.011:526): prog-id=146 op=LOAD Jan 28 01:49:02.011000 audit: BPF prog-id=146 op=LOAD Jan 28 01:49:02.020000 audit: BPF prog-id=147 op=LOAD Jan 28 01:49:02.073982 kernel: audit: type=1334 audit(1769564942.020:527): prog-id=147 op=LOAD Jan 28 01:49:02.074118 kernel: audit: type=1300 audit(1769564942.020:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.097397 kernel: audit: type=1327 audit(1769564942.020:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: BPF prog-id=147 op=UNLOAD Jan 28 01:49:02.172878 kernel: audit: type=1334 audit(1769564942.020:528): prog-id=147 op=UNLOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3300 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.230955 kernel: audit: type=1300 audit(1769564942.020:528): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.240810 kernel: audit: type=1327 audit(1769564942.020:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.345441 kernel: audit: type=1334 audit(1769564942.020:529): prog-id=148 op=LOAD Jan 28 01:49:02.020000 audit: BPF prog-id=148 op=LOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.396362 kernel: audit: type=1300 audit(1769564942.020:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:49:02.396504 kernel: audit: type=1327 audit(1769564942.020:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: BPF prog-id=149 op=LOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: BPF prog-id=149 op=UNLOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 
Jan 28 01:49:02.020000 audit: BPF prog-id=148 op=UNLOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.020000 audit: BPF prog-id=150 op=LOAD Jan 28 01:49:02.020000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3164 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:02.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062623862316264356338323161353765306430646362343966396463 Jan 28 01:49:02.485570 containerd[1609]: time="2026-01-28T01:49:02.484379042Z" level=info msg="StartContainer for \"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\" returns successfully" Jan 28 01:49:03.110224 kubelet[2967]: I0128 01:49:03.109934 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-25ww8" podStartSLOduration=4.695664441 podStartE2EDuration="23.109916653s" podCreationTimestamp="2026-01-28 01:48:40 +0000 UTC" firstStartedPulling="2026-01-28 01:48:42.999598618 +0000 UTC m=+9.604510288" lastFinishedPulling="2026-01-28 01:49:01.413850831 
+0000 UTC m=+28.018762500" observedRunningTime="2026-01-28 01:49:03.103165288 +0000 UTC m=+29.708076968" watchObservedRunningTime="2026-01-28 01:49:03.109916653 +0000 UTC m=+29.714828352" Jan 28 01:49:28.409366 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 28 01:49:28.433001 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:49:28.433171 kernel: audit: type=1106 audit(1769564968.408:534): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:49:28.408000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:49:28.501926 kernel: audit: type=1104 audit(1769564968.408:535): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:49:28.408000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:49:28.522488 sshd[1815]: Connection closed by 10.0.0.1 port 42794 Jan 28 01:49:28.537736 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:28.549000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:49:28.572284 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:42794.service: Deactivated successfully. Jan 28 01:49:28.549000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:49:28.661893 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:49:28.671076 systemd[1]: session-8.scope: Consumed 17.044s CPU time, 215.3M memory peak. 
Jan 28 01:49:28.711797 kernel: audit: type=1106 audit(1769564968.549:536): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:49:28.711940 kernel: audit: type=1104 audit(1769564968.549:537): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:49:28.711981 kernel: audit: type=1131 audit(1769564968.571:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:42794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:49:28.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:42794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:49:28.716582 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:49:28.793901 systemd-logind[1586]: Removed session 8. 
Jan 28 01:49:33.905000 audit[3397]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:33.939592 kernel: audit: type=1325 audit(1769564973.905:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:33.905000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc69b64e30 a2=0 a3=7ffc69b64e1c items=0 ppid=3078 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:33.985585 kernel: audit: type=1300 audit(1769564973.905:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc69b64e30 a2=0 a3=7ffc69b64e1c items=0 ppid=3078 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:33.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.002389 kernel: audit: type=1327 audit(1769564973.905:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.002498 kernel: audit: type=1325 audit(1769564973.985:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:33.985000 audit[3397]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:34.062788 kernel: audit: type=1300 audit(1769564973.985:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc69b64e30 a2=0 a3=0 
items=0 ppid=3078 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:33.985000 audit[3397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc69b64e30 a2=0 a3=0 items=0 ppid=3078 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:33.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.083815 kernel: audit: type=1327 audit(1769564973.985:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.107000 audit[3399]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:34.131755 kernel: audit: type=1325 audit(1769564974.107:541): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:34.107000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd174dc650 a2=0 a3=7ffd174dc63c items=0 ppid=3078 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:34.173767 kernel: audit: type=1300 audit(1769564974.107:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd174dc650 a2=0 a3=7ffd174dc63c items=0 ppid=3078 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:34.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.224860 kernel: audit: type=1327 audit(1769564974.107:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:34.307091 kernel: audit: type=1325 audit(1769564974.267:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:34.267000 audit[3399]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:34.267000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd174dc650 a2=0 a3=0 items=0 ppid=3078 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:34.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.162388 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:49:47.162636 kernel: audit: type=1325 audit(1769564987.117:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.117000 audit[3406]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.117000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc44a528e0 a2=0 a3=7ffc44a528cc items=0 ppid=3078 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.276365 kernel: audit: type=1300 audit(1769564987.117:543): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc44a528e0 a2=0 a3=7ffc44a528cc items=0 ppid=3078 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.276529 kernel: audit: type=1327 audit(1769564987.117:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.320000 audit[3406]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.410446 kernel: audit: type=1325 audit(1769564987.320:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.416215 kernel: audit: type=1300 audit(1769564987.320:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc44a528e0 a2=0 a3=0 items=0 ppid=3078 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.320000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc44a528e0 a2=0 a3=0 items=0 ppid=3078 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.320000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.462385 kernel: audit: type=1327 audit(1769564987.320:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.684000 audit[3408]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.712513 kernel: audit: type=1325 audit(1769564987.684:545): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.684000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb4e7a960 a2=0 a3=7ffcb4e7a94c items=0 ppid=3078 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.760947 kernel: audit: type=1300 audit(1769564987.684:545): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb4e7a960 a2=0 a3=7ffcb4e7a94c items=0 ppid=3078 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.761016 kernel: audit: type=1327 audit(1769564987.684:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:47.693000 audit[3408]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.797196 kernel: audit: type=1325 
audit(1769564987.693:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:47.693000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb4e7a960 a2=0 a3=0 items=0 ppid=3078 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:47.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:52.197878 kubelet[2967]: E0128 01:49:52.196993 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:54.202836 kubelet[2967]: E0128 01:49:54.200244 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:58.303622 kubelet[2967]: E0128 01:49:58.302466 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:59.927000 audit[3411]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:59.946607 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:49:59.947092 kernel: audit: type=1325 audit(1769564999.927:547): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:49:59.987212 kernel: audit: type=1300 audit(1769564999.927:547): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc53182500 a2=0 a3=7ffc531824ec items=0 ppid=3078 
pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:59.927000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc53182500 a2=0 a3=7ffc531824ec items=0 ppid=3078 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:59.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.066778 kernel: audit: type=1327 audit(1769564999.927:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:49:59.986000 audit[3411]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.192368 kernel: audit: type=1325 audit(1769564999.986:548): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.192512 kernel: audit: type=1300 audit(1769564999.986:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc53182500 a2=0 a3=0 items=0 ppid=3078 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:59.986000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc53182500 a2=0 a3=0 items=0 ppid=3078 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:49:59.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.205771 kernel: audit: type=1327 audit(1769564999.986:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.229802 kubelet[2967]: E0128 01:50:00.224137 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:00.278000 audit[3413]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.337385 kernel: audit: type=1325 audit(1769565000.278:549): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.278000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc5ca7beb0 a2=0 a3=7ffc5ca7be9c items=0 ppid=3078 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:00.365266 systemd[1]: Created slice kubepods-besteffort-pod70b124fc_bfda_4281_a3a4_8215ce0f6877.slice - libcontainer container kubepods-besteffort-pod70b124fc_bfda_4281_a3a4_8215ce0f6877.slice. 
Jan 28 01:50:00.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.433965 kernel: audit: type=1300 audit(1769565000.278:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc5ca7beb0 a2=0 a3=7ffc5ca7be9c items=0 ppid=3078 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:00.434075 kernel: audit: type=1327 audit(1769565000.278:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.434117 kernel: audit: type=1325 audit(1769565000.336:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.336000 audit[3413]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:00.336000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5ca7beb0 a2=0 a3=0 items=0 ppid=3078 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:00.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:00.494497 kubelet[2967]: I0128 01:50:00.494463 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70b124fc-bfda-4281-a3a4-8215ce0f6877-tigera-ca-bundle\") pod \"calico-typha-c858fb87c-l4q42\" (UID: \"70b124fc-bfda-4281-a3a4-8215ce0f6877\") " 
pod="calico-system/calico-typha-c858fb87c-l4q42" Jan 28 01:50:00.497172 kubelet[2967]: I0128 01:50:00.497067 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/70b124fc-bfda-4281-a3a4-8215ce0f6877-typha-certs\") pod \"calico-typha-c858fb87c-l4q42\" (UID: \"70b124fc-bfda-4281-a3a4-8215ce0f6877\") " pod="calico-system/calico-typha-c858fb87c-l4q42" Jan 28 01:50:00.497172 kubelet[2967]: I0128 01:50:00.497125 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5gb\" (UniqueName: \"kubernetes.io/projected/70b124fc-bfda-4281-a3a4-8215ce0f6877-kube-api-access-cl5gb\") pod \"calico-typha-c858fb87c-l4q42\" (UID: \"70b124fc-bfda-4281-a3a4-8215ce0f6877\") " pod="calico-system/calico-typha-c858fb87c-l4q42" Jan 28 01:50:01.778854 kubelet[2967]: E0128 01:50:01.770985 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:02.461118 containerd[1609]: time="2026-01-28T01:50:02.329599495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c858fb87c-l4q42,Uid:70b124fc-bfda-4281-a3a4-8215ce0f6877,Namespace:calico-system,Attempt:0,}" Jan 28 01:50:10.816494 systemd[1713]: Created slice background.slice - User Background Tasks Slice. Jan 28 01:50:10.820902 systemd[1713]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Jan 28 01:50:11.024993 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 01:50:11.026839 kernel: audit: type=1325 audit(1769565010.985:551): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:10.985000 audit[3419]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:11.020077 systemd[1713]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 28 01:50:11.170260 kernel: audit: type=1300 audit(1769565010.985:551): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3c0f6410 a2=0 a3=7fff3c0f63fc items=0 ppid=3078 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:10.985000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3c0f6410 a2=0 a3=7fff3c0f63fc items=0 ppid=3078 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:10.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:11.185933 systemd[1]: cri-containerd-0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9.scope: Deactivated successfully. Jan 28 01:50:11.215591 kernel: audit: type=1327 audit(1769565010.985:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:11.189056 systemd[1]: cri-containerd-0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9.scope: Consumed 14.885s CPU time, 80.1M memory peak. 
Jan 28 01:50:11.216629 kubelet[2967]: E0128 01:50:11.216591 2967 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.951s" Jan 28 01:50:11.275221 kernel: audit: type=1325 audit(1769565011.199:552): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:11.387053 kernel: audit: type=1300 audit(1769565011.199:552): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3c0f6410 a2=0 a3=0 items=0 ppid=3078 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:11.199000 audit[3419]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:11.199000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3c0f6410 a2=0 a3=0 items=0 ppid=3078 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:11.391611 containerd[1609]: time="2026-01-28T01:50:11.323258043Z" level=info msg="connecting to shim 97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33" address="unix:///run/containerd/s/eb9ed6c95f4bde7ee5c2c97acdb23d311f37572a0de9e9a33c0a4f6194b30762" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:50:11.391611 containerd[1609]: time="2026-01-28T01:50:11.325929853Z" level=info msg="received container exit event container_id:\"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\" id:\"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\" pid:3315 exit_status:1 exited_at:{seconds:1769565011 nanos:242020448}" Jan 28 01:50:11.407425 kernel: audit: 
type=1327 audit(1769565011.199:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:11.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:11.199000 audit: BPF prog-id=146 op=UNLOAD Jan 28 01:50:11.485320 kernel: audit: type=1334 audit(1769565011.199:553): prog-id=146 op=UNLOAD Jan 28 01:50:11.485615 kernel: audit: type=1334 audit(1769565011.199:554): prog-id=150 op=UNLOAD Jan 28 01:50:11.199000 audit: BPF prog-id=150 op=UNLOAD Jan 28 01:50:11.905913 systemd[1]: Started cri-containerd-97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33.scope - libcontainer container 97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33. Jan 28 01:50:12.172000 audit: BPF prog-id=151 op=LOAD Jan 28 01:50:12.199615 kernel: audit: type=1334 audit(1769565012.172:555): prog-id=151 op=LOAD Jan 28 01:50:12.199822 kernel: audit: type=1334 audit(1769565012.180:556): prog-id=152 op=LOAD Jan 28 01:50:12.180000 audit: BPF prog-id=152 op=LOAD Jan 28 01:50:12.180000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.180000 audit: BPF prog-id=152 op=UNLOAD Jan 28 01:50:12.180000 audit[3440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.192000 audit: BPF prog-id=153 op=LOAD Jan 28 01:50:12.192000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.192000 audit: BPF prog-id=154 op=LOAD Jan 28 01:50:12.192000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.192000 audit: BPF prog-id=154 op=UNLOAD Jan 28 01:50:12.192000 audit[3440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.192000 audit: BPF prog-id=153 op=UNLOAD Jan 28 01:50:12.192000 audit[3440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.192000 audit: BPF prog-id=155 op=LOAD Jan 28 01:50:12.192000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:12.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937333037646233613536383437633262336561353431316431346462 Jan 28 01:50:12.320915 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9-rootfs.mount: Deactivated successfully. Jan 28 01:50:12.722020 containerd[1609]: time="2026-01-28T01:50:12.721894688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c858fb87c-l4q42,Uid:70b124fc-bfda-4281-a3a4-8215ce0f6877,Namespace:calico-system,Attempt:0,} returns sandbox id \"97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33\"" Jan 28 01:50:12.724156 kubelet[2967]: E0128 01:50:12.724069 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:12.727123 containerd[1609]: time="2026-01-28T01:50:12.726516679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:50:13.249907 kubelet[2967]: I0128 01:50:13.249103 2967 scope.go:117] "RemoveContainer" containerID="0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9" Jan 28 01:50:13.260253 containerd[1609]: time="2026-01-28T01:50:13.260200052Z" level=info msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 28 01:50:13.379085 containerd[1609]: time="2026-01-28T01:50:13.378958078Z" level=info msg="Container 1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:50:13.457817 containerd[1609]: time="2026-01-28T01:50:13.434311517Z" level=info msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26\"" Jan 28 01:50:13.457817 containerd[1609]: time="2026-01-28T01:50:13.437822693Z" level=info msg="StartContainer for 
\"1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26\"" Jan 28 01:50:13.457817 containerd[1609]: time="2026-01-28T01:50:13.439038351Z" level=info msg="connecting to shim 1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26" address="unix:///run/containerd/s/d09acdc242401520f6653fb2b4f019199fc6c2a2fe093660366a037d4b219284" protocol=ttrpc version=3 Jan 28 01:50:13.692835 systemd[1]: Started cri-containerd-1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26.scope - libcontainer container 1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26. Jan 28 01:50:13.856000 audit: BPF prog-id=156 op=LOAD Jan 28 01:50:13.861000 audit: BPF prog-id=157 op=LOAD Jan 28 01:50:13.861000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.861000 audit: BPF prog-id=157 op=UNLOAD Jan 28 01:50:13.861000 audit[3478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.861000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.861000 audit: BPF prog-id=158 op=LOAD Jan 28 01:50:13.861000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.862000 audit: BPF prog-id=159 op=LOAD Jan 28 01:50:13.862000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.862000 audit: BPF prog-id=159 op=UNLOAD Jan 28 01:50:13.862000 audit[3478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:50:13.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.862000 audit: BPF prog-id=158 op=UNLOAD Jan 28 01:50:13.862000 audit[3478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:13.862000 audit: BPF prog-id=160 op=LOAD Jan 28 01:50:13.862000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3164 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:13.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139383263343965323262343062363634616633383037323836616537 Jan 28 01:50:14.675037 containerd[1609]: time="2026-01-28T01:50:14.669961477Z" level=info msg="StartContainer for \"1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26\" returns successfully" Jan 28 01:50:15.413042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631912954.mount: Deactivated 
successfully. Jan 28 01:50:24.246643 containerd[1609]: time="2026-01-28T01:50:24.246446066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:50:24.252794 containerd[1609]: time="2026-01-28T01:50:24.252618800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 28 01:50:24.262278 containerd[1609]: time="2026-01-28T01:50:24.261291526Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:50:24.280376 containerd[1609]: time="2026-01-28T01:50:24.280317684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:50:24.286131 containerd[1609]: time="2026-01-28T01:50:24.285857656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 11.559303398s" Jan 28 01:50:24.286131 containerd[1609]: time="2026-01-28T01:50:24.285917188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:50:24.368589 containerd[1609]: time="2026-01-28T01:50:24.366578707Z" level=info msg="CreateContainer within sandbox \"97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:50:24.406340 containerd[1609]: time="2026-01-28T01:50:24.404827676Z" 
level=info msg="Container 009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:50:24.438978 containerd[1609]: time="2026-01-28T01:50:24.438088867Z" level=info msg="CreateContainer within sandbox \"97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2\"" Jan 28 01:50:24.442635 containerd[1609]: time="2026-01-28T01:50:24.440067296Z" level=info msg="StartContainer for \"009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2\"" Jan 28 01:50:24.443826 containerd[1609]: time="2026-01-28T01:50:24.443744826Z" level=info msg="connecting to shim 009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2" address="unix:///run/containerd/s/eb9ed6c95f4bde7ee5c2c97acdb23d311f37572a0de9e9a33c0a4f6194b30762" protocol=ttrpc version=3 Jan 28 01:50:24.545263 systemd[1]: Started cri-containerd-009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2.scope - libcontainer container 009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2. 
Jan 28 01:50:24.673000 audit: BPF prog-id=161 op=LOAD Jan 28 01:50:24.687919 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 28 01:50:24.688115 kernel: audit: type=1334 audit(1769565024.673:571): prog-id=161 op=LOAD Jan 28 01:50:24.687000 audit: BPF prog-id=162 op=LOAD Jan 28 01:50:24.708282 kernel: audit: type=1334 audit(1769565024.687:572): prog-id=162 op=LOAD Jan 28 01:50:24.708459 kernel: audit: type=1300 audit(1769565024.687:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.812004 kernel: audit: type=1327 audit(1769565024.687:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.687000 audit: BPF prog-id=162 op=UNLOAD Jan 28 01:50:24.868644 kernel: audit: type=1334 audit(1769565024.687:573): prog-id=162 op=UNLOAD Jan 28 01:50:24.868845 kernel: audit: type=1300 audit(1769565024.687:573): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.947175 kernel: audit: type=1327 audit(1769565024.687:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.687000 audit: BPF prog-id=163 op=LOAD Jan 28 01:50:24.978632 kernel: audit: type=1334 audit(1769565024.687:574): prog-id=163 op=LOAD Jan 28 01:50:24.978901 kernel: audit: type=1300 audit(1769565024.687:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:25.013776 kernel: audit: type=1327 audit(1769565024.687:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.687000 audit: BPF prog-id=164 op=LOAD Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.687000 audit: BPF prog-id=164 op=UNLOAD Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 
Jan 28 01:50:24.687000 audit: BPF prog-id=163 op=UNLOAD Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:24.687000 audit: BPF prog-id=165 op=LOAD Jan 28 01:50:24.687000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3429 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:24.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030396664633838333631303038396531396239653130313238353565 Jan 28 01:50:25.145627 containerd[1609]: time="2026-01-28T01:50:25.145406445Z" level=info msg="StartContainer for \"009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2\" returns successfully" Jan 28 01:50:25.882874 kubelet[2967]: E0128 01:50:25.880959 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:26.930557 kubelet[2967]: E0128 01:50:26.925418 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:27.075267 kubelet[2967]: I0128 01:50:27.072384 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c858fb87c-l4q42" podStartSLOduration=16.50997639 podStartE2EDuration="28.072367536s" podCreationTimestamp="2026-01-28 01:49:59 +0000 UTC" firstStartedPulling="2026-01-28 01:50:12.726183227 +0000 UTC m=+99.331094895" lastFinishedPulling="2026-01-28 01:50:24.288574372 +0000 UTC m=+110.893486041" observedRunningTime="2026-01-28 01:50:26.001414504 +0000 UTC m=+112.606326193" watchObservedRunningTime="2026-01-28 01:50:27.072367536 +0000 UTC m=+113.677279206" Jan 28 01:50:27.282000 audit[3569]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:27.282000 audit[3569]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe80683870 a2=0 a3=7ffe8068385c items=0 ppid=3078 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:27.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:27.312000 audit[3569]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:50:27.312000 audit[3569]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe80683870 a2=0 a3=7ffe8068385c items=0 ppid=3078 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:50:27.312000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:50:27.939212 kubelet[2967]: E0128 01:50:27.938756 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:50:33.961956 kubelet[2967]: E0128 01:50:33.955150 2967 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 28 01:50:36.218370 kubelet[2967]: E0128 01:50:36.218286 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:50:41.230067 kubelet[2967]: E0128 01:50:41.228869 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:50:46.309061 kubelet[2967]: E0128 01:50:46.308150 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:50:51.435592 kubelet[2967]: E0128 01:50:51.427826 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:03.480810 kubelet[2967]: E0128 01:51:03.476474 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:04.542226 systemd[1]: cri-containerd-2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c.scope: Deactivated successfully. 
Jan 28 01:51:04.543619 systemd[1]: cri-containerd-2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c.scope: Consumed 6.787s CPU time, 21.1M memory peak, 296K read from disk. Jan 28 01:51:04.598240 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 28 01:51:04.604614 kernel: audit: type=1334 audit(1769565064.581:581): prog-id=108 op=UNLOAD Jan 28 01:51:04.581000 audit: BPF prog-id=108 op=UNLOAD Jan 28 01:51:04.581000 audit: BPF prog-id=112 op=UNLOAD Jan 28 01:51:04.709989 kernel: audit: type=1334 audit(1769565064.581:582): prog-id=112 op=UNLOAD Jan 28 01:51:04.710240 kernel: audit: type=1334 audit(1769565064.593:583): prog-id=166 op=LOAD Jan 28 01:51:04.593000 audit: BPF prog-id=166 op=LOAD Jan 28 01:51:04.736367 kernel: audit: type=1334 audit(1769565064.603:584): prog-id=93 op=UNLOAD Jan 28 01:51:04.603000 audit: BPF prog-id=93 op=UNLOAD Jan 28 01:51:04.736964 kubelet[2967]: E0128 01:51:04.721146 2967 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.486s" Jan 28 01:51:04.776205 containerd[1609]: time="2026-01-28T01:51:04.776144638Z" level=info msg="received container exit event container_id:\"2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c\" id:\"2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c\" pid:2816 exit_status:1 exited_at:{seconds:1769565064 nanos:703336536}" Jan 28 01:51:04.801090 kubelet[2967]: E0128 01:51:04.794069 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:04.801090 kubelet[2967]: E0128 01:51:04.794144 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:04.801090 kubelet[2967]: E0128 01:51:04.794362 2967 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:04.801090 kubelet[2967]: E0128 01:51:04.794861 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:05.743791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c-rootfs.mount: Deactivated successfully. Jan 28 01:51:06.108959 kubelet[2967]: E0128 01:51:06.108786 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:07.193437 kubelet[2967]: I0128 01:51:07.185126 2967 scope.go:117] "RemoveContainer" containerID="2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c" Jan 28 01:51:07.193437 kubelet[2967]: E0128 01:51:07.185495 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:07.490014 containerd[1609]: time="2026-01-28T01:51:07.489868394Z" level=info msg="CreateContainer within sandbox \"cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:51:07.648191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642184625.mount: Deactivated successfully. Jan 28 01:51:07.696119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275418752.mount: Deactivated successfully. 
Jan 28 01:51:07.711222 containerd[1609]: time="2026-01-28T01:51:07.710180017Z" level=info msg="Container 618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:51:07.803833 containerd[1609]: time="2026-01-28T01:51:07.795138905Z" level=info msg="CreateContainer within sandbox \"cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516\"" Jan 28 01:51:07.806038 containerd[1609]: time="2026-01-28T01:51:07.805916068Z" level=info msg="StartContainer for \"618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516\"" Jan 28 01:51:07.818108 containerd[1609]: time="2026-01-28T01:51:07.817942351Z" level=info msg="connecting to shim 618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516" address="unix:///run/containerd/s/004f2f3be3ea3e03f58e057bbaa5bebeda8a7e17fab2acdd2a25a710ace625a7" protocol=ttrpc version=3 Jan 28 01:51:08.132972 systemd[1]: Started cri-containerd-618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516.scope - libcontainer container 618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516. 
Jan 28 01:51:08.409000 audit: BPF prog-id=167 op=LOAD Jan 28 01:51:08.433891 kernel: audit: type=1334 audit(1769565068.409:585): prog-id=167 op=LOAD Jan 28 01:51:08.409000 audit: BPF prog-id=168 op=LOAD Jan 28 01:51:08.488624 kernel: audit: type=1334 audit(1769565068.409:586): prog-id=168 op=LOAD Jan 28 01:51:08.409000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0238 a2=98 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.556358 kubelet[2967]: E0128 01:51:08.494650 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:08.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.616613 kernel: audit: type=1300 audit(1769565068.409:586): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0238 a2=98 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.616894 kernel: audit: type=1327 audit(1769565068.409:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.409000 audit: BPF prog-id=168 op=UNLOAD Jan 28 01:51:08.672324 kernel: audit: type=1334 
audit(1769565068.409:587): prog-id=168 op=UNLOAD Jan 28 01:51:08.672483 kernel: audit: type=1300 audit(1769565068.409:587): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.409000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.409000 audit: BPF prog-id=169 op=LOAD Jan 28 01:51:08.409000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0488 a2=98 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.409000 audit: BPF prog-id=170 op=LOAD Jan 28 01:51:08.409000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001e0218 a2=98 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.427000 audit: BPF prog-id=170 op=UNLOAD Jan 28 01:51:08.427000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.427000 audit: BPF prog-id=169 op=UNLOAD Jan 28 01:51:08.427000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.427000 audit: BPF prog-id=171 op=LOAD Jan 28 01:51:08.427000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e06e8 a2=98 a3=0 items=0 ppid=2661 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:08.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383138356562643838393935323139656464373430633438356566 Jan 28 01:51:08.834113 containerd[1609]: time="2026-01-28T01:51:08.833811156Z" level=info msg="StartContainer for \"618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516\" returns successfully" Jan 28 01:51:09.302947 kubelet[2967]: E0128 01:51:09.302418 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:10.308249 kubelet[2967]: E0128 01:51:10.307885 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:13.216478 kubelet[2967]: E0128 01:51:13.216286 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:13.228990 kubelet[2967]: E0128 01:51:13.226856 2967 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"cni-config\"" type="*v1.ConfigMap" Jan 28 01:51:13.228990 kubelet[2967]: E0128 01:51:13.227586 
2967 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Jan 28 01:51:13.243365 systemd[1]: Created slice kubepods-besteffort-pod606193a6_82d3_4faa_a3ef_4bde79cd518b.slice - libcontainer container kubepods-besteffort-pod606193a6_82d3_4faa_a3ef_4bde79cd518b.slice. Jan 28 01:51:13.258876 kubelet[2967]: I0128 01:51:13.258376 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d33e070d-1851-4242-98ee-97e68b203245-varrun\") pod \"csi-node-driver-ms9md\" (UID: \"d33e070d-1851-4242-98ee-97e68b203245\") " pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:13.260043 kubelet[2967]: I0128 01:51:13.260015 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882zm\" (UniqueName: \"kubernetes.io/projected/d33e070d-1851-4242-98ee-97e68b203245-kube-api-access-882zm\") pod \"csi-node-driver-ms9md\" (UID: \"d33e070d-1851-4242-98ee-97e68b203245\") " pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:13.261469 kubelet[2967]: I0128 01:51:13.260226 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-lib-modules\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261469 kubelet[2967]: I0128 01:51:13.260266 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/606193a6-82d3-4faa-a3ef-4bde79cd518b-node-certs\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261469 kubelet[2967]: I0128 01:51:13.260300 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-cni-bin-dir\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261469 kubelet[2967]: I0128 01:51:13.260322 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/606193a6-82d3-4faa-a3ef-4bde79cd518b-tigera-ca-bundle\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261469 kubelet[2967]: I0128 01:51:13.260344 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-var-lib-calico\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261975 kubelet[2967]: I0128 01:51:13.260366 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-xtables-lock\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261975 kubelet[2967]: I0128 01:51:13.260386 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstmv\" (UniqueName: 
\"kubernetes.io/projected/606193a6-82d3-4faa-a3ef-4bde79cd518b-kube-api-access-hstmv\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.261975 kubelet[2967]: I0128 01:51:13.260412 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d33e070d-1851-4242-98ee-97e68b203245-kubelet-dir\") pod \"csi-node-driver-ms9md\" (UID: \"d33e070d-1851-4242-98ee-97e68b203245\") " pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:13.261975 kubelet[2967]: I0128 01:51:13.260435 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d33e070d-1851-4242-98ee-97e68b203245-registration-dir\") pod \"csi-node-driver-ms9md\" (UID: \"d33e070d-1851-4242-98ee-97e68b203245\") " pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:13.261975 kubelet[2967]: I0128 01:51:13.260505 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-policysync\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.262136 kubelet[2967]: I0128 01:51:13.260531 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d33e070d-1851-4242-98ee-97e68b203245-socket-dir\") pod \"csi-node-driver-ms9md\" (UID: \"d33e070d-1851-4242-98ee-97e68b203245\") " pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:13.262136 kubelet[2967]: I0128 01:51:13.260557 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-var-run-calico\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.265434 kubelet[2967]: I0128 01:51:13.265287 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-cni-log-dir\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.265434 kubelet[2967]: I0128 01:51:13.265335 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-cni-net-dir\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.265434 kubelet[2967]: I0128 01:51:13.265367 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/606193a6-82d3-4faa-a3ef-4bde79cd518b-flexvol-driver-host\") pod \"calico-node-wkj6h\" (UID: \"606193a6-82d3-4faa-a3ef-4bde79cd518b\") " pod="calico-system/calico-node-wkj6h" Jan 28 01:51:13.378373 kubelet[2967]: E0128 01:51:13.378248 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:13.378373 kubelet[2967]: W0128 01:51:13.378320 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:13.378373 kubelet[2967]: E0128 01:51:13.378346 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:13.458277 kubelet[2967]: E0128 01:51:13.458027 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:13.458277 kubelet[2967]: W0128 01:51:13.458065 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:13.458277 kubelet[2967]: E0128 01:51:13.458093 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:13.461368 kubelet[2967]: E0128 01:51:13.461293 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:13.461368 kubelet[2967]: W0128 01:51:13.461313 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:13.461368 kubelet[2967]: E0128 01:51:13.461333 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:13.501927 kubelet[2967]: E0128 01:51:13.501597 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:14.373405 kubelet[2967]: E0128 01:51:14.373233 2967 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jan 28 01:51:14.374568 kubelet[2967]: E0128 01:51:14.374353 2967 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/606193a6-82d3-4faa-a3ef-4bde79cd518b-node-certs podName:606193a6-82d3-4faa-a3ef-4bde79cd518b nodeName:}" failed. No retries permitted until 2026-01-28 01:51:14.874321892 +0000 UTC m=+161.479233561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/606193a6-82d3-4faa-a3ef-4bde79cd518b-node-certs") pod "calico-node-wkj6h" (UID: "606193a6-82d3-4faa-a3ef-4bde79cd518b") : failed to sync secret cache: timed out waiting for the condition Jan 28 01:51:14.393596 kubelet[2967]: E0128 01:51:14.393468 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.393596 kubelet[2967]: W0128 01:51:14.393502 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.393596 kubelet[2967]: E0128 01:51:14.393531 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:14.497159 kubelet[2967]: E0128 01:51:14.497019 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.497159 kubelet[2967]: W0128 01:51:14.497055 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.497159 kubelet[2967]: E0128 01:51:14.497083 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:14.607986 kubelet[2967]: E0128 01:51:14.606237 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.607986 kubelet[2967]: W0128 01:51:14.606307 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.607986 kubelet[2967]: E0128 01:51:14.606337 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:14.726890 kubelet[2967]: E0128 01:51:14.722787 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.726890 kubelet[2967]: W0128 01:51:14.722965 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.726890 kubelet[2967]: E0128 01:51:14.723106 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:14.832280 kubelet[2967]: E0128 01:51:14.832161 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.832280 kubelet[2967]: W0128 01:51:14.832198 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.832280 kubelet[2967]: E0128 01:51:14.832222 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:14.940410 kubelet[2967]: E0128 01:51:14.939848 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.940410 kubelet[2967]: W0128 01:51:14.939880 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.940410 kubelet[2967]: E0128 01:51:14.939907 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:14.946266 kubelet[2967]: E0128 01:51:14.944880 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.946266 kubelet[2967]: W0128 01:51:14.944992 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.946266 kubelet[2967]: E0128 01:51:14.945013 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:14.947387 kubelet[2967]: E0128 01:51:14.946930 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.947387 kubelet[2967]: W0128 01:51:14.946945 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.947387 kubelet[2967]: E0128 01:51:14.946961 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:14.951096 kubelet[2967]: E0128 01:51:14.949795 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.951096 kubelet[2967]: W0128 01:51:14.949815 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.951096 kubelet[2967]: E0128 01:51:14.949963 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:14.965004 kubelet[2967]: E0128 01:51:14.960050 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:14.965004 kubelet[2967]: W0128 01:51:14.960080 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:14.965004 kubelet[2967]: E0128 01:51:14.960110 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:15.007217 kubelet[2967]: E0128 01:51:15.002924 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:15.007217 kubelet[2967]: W0128 01:51:15.002957 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:15.007217 kubelet[2967]: E0128 01:51:15.002988 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:15.079611 kubelet[2967]: E0128 01:51:15.071932 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:15.079871 containerd[1609]: time="2026-01-28T01:51:15.075354321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wkj6h,Uid:606193a6-82d3-4faa-a3ef-4bde79cd518b,Namespace:calico-system,Attempt:0,}" Jan 28 01:51:15.189777 kubelet[2967]: E0128 01:51:15.189480 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:15.340216 containerd[1609]: time="2026-01-28T01:51:15.338629976Z" level=info msg="connecting to shim 03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e" address="unix:///run/containerd/s/f9a591991b1f8683c4081b2a7c46b7155d5ae3c18b3032a72e2c0d88fbcf5b12" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:51:15.798600 systemd[1]: Started cri-containerd-03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e.scope - libcontainer container 03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e. 
Jan 28 01:51:15.926000 audit: BPF prog-id=172 op=LOAD Jan 28 01:51:15.940212 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 28 01:51:15.940382 kernel: audit: type=1334 audit(1769565075.926:593): prog-id=172 op=LOAD Jan 28 01:51:15.958974 kernel: audit: type=1334 audit(1769565075.929:594): prog-id=173 op=LOAD Jan 28 01:51:15.929000 audit: BPF prog-id=173 op=LOAD Jan 28 01:51:15.929000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:16.077437 kernel: audit: type=1300 audit(1769565075.929:594): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:16.077857 kernel: audit: type=1327 audit(1769565075.929:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:15.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:16.124212 kernel: audit: type=1334 audit(1769565075.929:595): prog-id=173 op=UNLOAD Jan 28 01:51:15.929000 audit: BPF prog-id=173 op=UNLOAD Jan 28 01:51:15.929000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3678 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:16.232267 kernel: audit: type=1300 audit(1769565075.929:595): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:16.232433 kernel: audit: type=1327 audit(1769565075.929:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:16.247297 kernel: audit: type=1334 audit(1769565075.943:596): prog-id=174 op=LOAD Jan 28 01:51:15.943000 audit: BPF prog-id=174 op=LOAD Jan 28 01:51:16.301981 kernel: audit: type=1300 audit(1769565075.943:596): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.943000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:51:16.341206 kernel: audit: type=1327 audit(1769565075.943:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:15.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:15.943000 audit: BPF prog-id=175 op=LOAD Jan 28 01:51:15.943000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:15.943000 audit: BPF prog-id=175 op=UNLOAD Jan 28 01:51:15.943000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 
Jan 28 01:51:15.943000 audit: BPF prog-id=174 op=UNLOAD Jan 28 01:51:15.943000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:15.943000 audit: BPF prog-id=176 op=LOAD Jan 28 01:51:15.943000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3665 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:15.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033633463343634636538383435373966383663383432336331663163 Jan 28 01:51:16.483143 containerd[1609]: time="2026-01-28T01:51:16.482933637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wkj6h,Uid:606193a6-82d3-4faa-a3ef-4bde79cd518b,Namespace:calico-system,Attempt:0,} returns sandbox id \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\"" Jan 28 01:51:16.495405 kubelet[2967]: E0128 01:51:16.490531 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:16.504497 containerd[1609]: time="2026-01-28T01:51:16.504191212Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:51:17.186047 kubelet[2967]: E0128 01:51:17.185988 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:17.491023 kubelet[2967]: E0128 01:51:17.490202 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:17.599474 kubelet[2967]: E0128 01:51:17.596450 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:17.599474 kubelet[2967]: W0128 01:51:17.599135 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:17.604951 kubelet[2967]: E0128 01:51:17.601296 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:17.611372 kubelet[2967]: E0128 01:51:17.608636 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:17.611372 kubelet[2967]: W0128 01:51:17.608826 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:17.611372 kubelet[2967]: E0128 01:51:17.608858 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:17.622994 kubelet[2967]: E0128 01:51:17.621482 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:17.622994 kubelet[2967]: W0128 01:51:17.621514 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:17.622994 kubelet[2967]: E0128 01:51:17.621543 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:17.637400 kubelet[2967]: E0128 01:51:17.637354 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:17.638997 kubelet[2967]: W0128 01:51:17.638539 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:17.638997 kubelet[2967]: E0128 01:51:17.638586 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:17.648905 kubelet[2967]: E0128 01:51:17.646627 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:17.648905 kubelet[2967]: W0128 01:51:17.646647 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:17.648905 kubelet[2967]: E0128 01:51:17.647285 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:18.547461 containerd[1609]: time="2026-01-28T01:51:18.540017587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:18.581242 containerd[1609]: time="2026-01-28T01:51:18.550375059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 28 01:51:18.581242 containerd[1609]: time="2026-01-28T01:51:18.566258228Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:18.582914 kubelet[2967]: E0128 01:51:18.572078 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:18.591289 containerd[1609]: time="2026-01-28T01:51:18.586425499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:18.591289 containerd[1609]: time="2026-01-28T01:51:18.587498270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.083124136s" Jan 28 01:51:18.591289 containerd[1609]: time="2026-01-28T01:51:18.587535980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference 
\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:51:18.617076 containerd[1609]: time="2026-01-28T01:51:18.606535282Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:51:18.774992 containerd[1609]: time="2026-01-28T01:51:18.774297517Z" level=info msg="Container f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:51:18.866926 containerd[1609]: time="2026-01-28T01:51:18.866336017Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f\"" Jan 28 01:51:18.867628 containerd[1609]: time="2026-01-28T01:51:18.867503876Z" level=info msg="StartContainer for \"f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f\"" Jan 28 01:51:18.879016 containerd[1609]: time="2026-01-28T01:51:18.878959947Z" level=info msg="connecting to shim f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f" address="unix:///run/containerd/s/f9a591991b1f8683c4081b2a7c46b7155d5ae3c18b3032a72e2c0d88fbcf5b12" protocol=ttrpc version=3 Jan 28 01:51:19.109906 systemd[1]: Started cri-containerd-f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f.scope - libcontainer container f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f. 
Jan 28 01:51:19.191281 kubelet[2967]: E0128 01:51:19.186564 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:19.293307 kubelet[2967]: E0128 01:51:19.293204 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.293307 kubelet[2967]: W0128 01:51:19.293268 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.293307 kubelet[2967]: E0128 01:51:19.293301 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.297661 kubelet[2967]: E0128 01:51:19.296625 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.302467 kubelet[2967]: W0128 01:51:19.301796 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.302467 kubelet[2967]: E0128 01:51:19.301834 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.307020 kubelet[2967]: E0128 01:51:19.304955 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.307020 kubelet[2967]: W0128 01:51:19.306797 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.307020 kubelet[2967]: E0128 01:51:19.306826 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.325540 kubelet[2967]: E0128 01:51:19.324488 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.325540 kubelet[2967]: W0128 01:51:19.324553 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.325540 kubelet[2967]: E0128 01:51:19.324585 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.329356 kubelet[2967]: E0128 01:51:19.328881 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.330265 kubelet[2967]: W0128 01:51:19.328904 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.330265 kubelet[2967]: E0128 01:51:19.329851 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.338454 kubelet[2967]: E0128 01:51:19.336581 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.338454 kubelet[2967]: W0128 01:51:19.338084 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.342823 kubelet[2967]: E0128 01:51:19.339082 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.344242 kubelet[2967]: E0128 01:51:19.344000 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.344242 kubelet[2967]: W0128 01:51:19.344025 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.344242 kubelet[2967]: E0128 01:51:19.344048 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.358357 kubelet[2967]: E0128 01:51:19.358201 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.369235 kubelet[2967]: W0128 01:51:19.367503 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.369235 kubelet[2967]: E0128 01:51:19.367556 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.379435 kubelet[2967]: E0128 01:51:19.379089 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.379435 kubelet[2967]: W0128 01:51:19.379160 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.379435 kubelet[2967]: E0128 01:51:19.379188 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.387480 kubelet[2967]: E0128 01:51:19.383465 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.387480 kubelet[2967]: W0128 01:51:19.383525 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.387480 kubelet[2967]: E0128 01:51:19.383554 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.393415 kubelet[2967]: E0128 01:51:19.392339 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.393415 kubelet[2967]: W0128 01:51:19.392366 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.393415 kubelet[2967]: E0128 01:51:19.392387 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.398944 kubelet[2967]: E0128 01:51:19.398118 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.398944 kubelet[2967]: W0128 01:51:19.398183 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.398944 kubelet[2967]: E0128 01:51:19.398211 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.401101 kubelet[2967]: E0128 01:51:19.399578 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.401101 kubelet[2967]: W0128 01:51:19.399636 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.401101 kubelet[2967]: E0128 01:51:19.399656 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.419019 kubelet[2967]: E0128 01:51:19.412631 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.419019 kubelet[2967]: W0128 01:51:19.417273 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.419019 kubelet[2967]: E0128 01:51:19.417436 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:51:19.461402 kubelet[2967]: E0128 01:51:19.460395 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.476909 kubelet[2967]: W0128 01:51:19.474662 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.477905 kubelet[2967]: E0128 01:51:19.476922 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.494000 audit: BPF prog-id=177 op=LOAD Jan 28 01:51:19.494000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3665 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:19.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663963396465306632663734363037636230303562616138336435 Jan 28 01:51:19.494000 audit: BPF prog-id=178 op=LOAD Jan 28 01:51:19.494000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3665 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:19.494000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663963396465306632663734363037636230303562616138336435 Jan 28 01:51:19.494000 audit: BPF prog-id=178 op=UNLOAD Jan 28 01:51:19.494000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:19.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663963396465306632663734363037636230303562616138336435 Jan 28 01:51:19.494000 audit: BPF prog-id=177 op=UNLOAD Jan 28 01:51:19.494000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:19.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663963396465306632663734363037636230303562616138336435 Jan 28 01:51:19.494000 audit: BPF prog-id=179 op=LOAD Jan 28 01:51:19.494000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3665 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:51:19.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663963396465306632663734363037636230303562616138336435 Jan 28 01:51:19.568087 kubelet[2967]: E0128 01:51:19.561513 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:51:19.573063 kubelet[2967]: W0128 01:51:19.571601 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:51:19.573063 kubelet[2967]: E0128 01:51:19.571844 2967 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:51:19.897939 containerd[1609]: time="2026-01-28T01:51:19.895258917Z" level=info msg="StartContainer for \"f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f\" returns successfully" Jan 28 01:51:20.285274 systemd[1]: cri-containerd-f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f.scope: Deactivated successfully. 
Jan 28 01:51:20.301000 audit: BPF prog-id=179 op=UNLOAD Jan 28 01:51:20.403945 containerd[1609]: time="2026-01-28T01:51:20.400168994Z" level=info msg="received container exit event container_id:\"f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f\" id:\"f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f\" pid:3724 exited_at:{seconds:1769565080 nanos:383441620}" Jan 28 01:51:20.811237 kubelet[2967]: E0128 01:51:20.809013 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:20.822646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f-rootfs.mount: Deactivated successfully. Jan 28 01:51:21.199291 kubelet[2967]: E0128 01:51:21.190564 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:21.825823 kubelet[2967]: E0128 01:51:21.825506 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:21.856971 containerd[1609]: time="2026-01-28T01:51:21.838536319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:51:23.211256 kubelet[2967]: E0128 01:51:23.204514 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:23.639896 
kubelet[2967]: E0128 01:51:23.635825 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:25.214845 kubelet[2967]: E0128 01:51:25.213601 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:27.197173 kubelet[2967]: E0128 01:51:27.188012 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:27.512851 kubelet[2967]: E0128 01:51:27.507473 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:28.029539 kubelet[2967]: E0128 01:51:28.029479 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:28.650535 kubelet[2967]: E0128 01:51:28.650491 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:29.185508 kubelet[2967]: E0128 01:51:29.185407 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:31.188101 kubelet[2967]: E0128 01:51:31.187775 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:33.194837 kubelet[2967]: E0128 01:51:33.189404 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:33.677026 kubelet[2967]: E0128 01:51:33.676962 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:35.187987 kubelet[2967]: E0128 01:51:35.185522 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:37.190470 kubelet[2967]: E0128 01:51:37.190397 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:38.697716 kubelet[2967]: E0128 01:51:38.695175 2967 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:38.741590 containerd[1609]: time="2026-01-28T01:51:38.741532478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 01:51:38.745303 containerd[1609]: time="2026-01-28T01:51:38.745230215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:38.773453 containerd[1609]: time="2026-01-28T01:51:38.768301508Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:38.817347 containerd[1609]: time="2026-01-28T01:51:38.817281767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:51:38.820174 containerd[1609]: time="2026-01-28T01:51:38.819964133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 16.98137174s" Jan 28 01:51:38.820174 containerd[1609]: time="2026-01-28T01:51:38.820060891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:51:38.889124 containerd[1609]: time="2026-01-28T01:51:38.888974471Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:51:39.020732 containerd[1609]: time="2026-01-28T01:51:39.018024425Z" level=info msg="Container 81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:51:39.193039 kubelet[2967]: E0128 01:51:39.186300 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:39.211227 containerd[1609]: time="2026-01-28T01:51:39.211046319Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16\"" Jan 28 01:51:39.214753 containerd[1609]: time="2026-01-28T01:51:39.212657717Z" level=info msg="StartContainer for \"81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16\"" Jan 28 01:51:39.223202 containerd[1609]: time="2026-01-28T01:51:39.223100119Z" level=info msg="connecting to shim 81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16" address="unix:///run/containerd/s/f9a591991b1f8683c4081b2a7c46b7155d5ae3c18b3032a72e2c0d88fbcf5b12" protocol=ttrpc version=3 Jan 28 01:51:39.467001 systemd[1]: Started cri-containerd-81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16.scope - libcontainer container 81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16. 
Jan 28 01:51:39.987635 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 28 01:51:39.987901 kernel: audit: type=1334 audit(1769565099.968:607): prog-id=180 op=LOAD Jan 28 01:51:39.968000 audit: BPF prog-id=180 op=LOAD Jan 28 01:51:40.065627 kernel: audit: type=1300 audit(1769565099.968:607): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:39.968000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:39.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:40.123814 kernel: audit: type=1327 audit(1769565099.968:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:40.123992 kernel: audit: type=1334 audit(1769565099.978:608): prog-id=181 op=LOAD Jan 28 01:51:39.978000 audit: BPF prog-id=181 op=LOAD Jan 28 01:51:39.978000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:40.187158 kernel: audit: type=1300 audit(1769565099.978:608): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:40.220742 kernel: audit: type=1327 audit(1769565099.978:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:39.978000 audit: BPF prog-id=181 op=UNLOAD Jan 28 01:51:39.978000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:40.251131 kernel: audit: type=1334 audit(1769565099.978:609): prog-id=181 op=UNLOAD Jan 28 01:51:40.251306 kernel: audit: type=1300 audit(1769565099.978:609): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:40.251366 kernel: audit: type=1327 audit(1769565099.978:609): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:40.280744 kernel: audit: type=1334 audit(1769565099.978:610): prog-id=180 op=UNLOAD Jan 28 01:51:39.978000 audit: BPF prog-id=180 op=UNLOAD Jan 28 01:51:39.978000 audit[3788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:39.978000 audit: BPF prog-id=182 op=LOAD Jan 28 01:51:39.978000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3665 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:51:39.978000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343139633463653134353030653634396135373338356365623262 Jan 28 01:51:40.406837 containerd[1609]: time="2026-01-28T01:51:40.402660740Z" level=info msg="StartContainer for \"81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16\" returns successfully" Jan 28 01:51:41.236332 kubelet[2967]: E0128 01:51:41.233379 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:41.338625 kubelet[2967]: E0128 01:51:41.336907 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:42.373597 kubelet[2967]: E0128 01:51:42.371338 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:43.224341 kubelet[2967]: E0128 01:51:43.207836 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:43.224341 kubelet[2967]: E0128 01:51:43.210396 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:43.714128 kubelet[2967]: E0128 
01:51:43.708891 2967 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:51:45.186874 kubelet[2967]: E0128 01:51:45.186017 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:47.196557 kubelet[2967]: E0128 01:51:47.188142 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:48.224036 systemd[1]: cri-containerd-81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16.scope: Deactivated successfully. Jan 28 01:51:48.226260 systemd[1]: cri-containerd-81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16.scope: Consumed 2.674s CPU time, 179.5M memory peak, 3.4M read from disk, 171.3M written to disk. 
Jan 28 01:51:48.267994 containerd[1609]: time="2026-01-28T01:51:48.249167266Z" level=info msg="received container exit event container_id:\"81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16\" id:\"81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16\" pid:3802 exited_at:{seconds:1769565108 nanos:241020936}" Jan 28 01:51:48.291044 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 01:51:48.291162 kernel: audit: type=1334 audit(1769565108.265:612): prog-id=182 op=UNLOAD Jan 28 01:51:48.265000 audit: BPF prog-id=182 op=UNLOAD Jan 28 01:51:48.860493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16-rootfs.mount: Deactivated successfully. Jan 28 01:51:49.274138 systemd[1]: Created slice kubepods-besteffort-podd33e070d_1851_4242_98ee_97e68b203245.slice - libcontainer container kubepods-besteffort-podd33e070d_1851_4242_98ee_97e68b203245.slice. Jan 28 01:51:49.305736 containerd[1609]: time="2026-01-28T01:51:49.305545326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:51:49.575494 kubelet[2967]: E0128 01:51:49.573429 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:49.687775 containerd[1609]: time="2026-01-28T01:51:49.687322167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:51:50.610825 containerd[1609]: time="2026-01-28T01:51:50.610766582Z" level=error msg="Failed to destroy network for sandbox \"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:51:50.641125 
systemd[1]: run-netns-cni\x2df93fd467\x2d5d06\x2ddebf\x2dc5ff\x2d4d153988019c.mount: Deactivated successfully. Jan 28 01:51:50.679408 containerd[1609]: time="2026-01-28T01:51:50.677271233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:51:50.681472 kubelet[2967]: E0128 01:51:50.678197 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:51:50.681472 kubelet[2967]: E0128 01:51:50.681091 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:50.681472 kubelet[2967]: E0128 01:51:50.681126 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:51:50.682187 kubelet[2967]: E0128 01:51:50.681185 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f21c06f7c3832ccdd750a082c0db6d15e92884fa9103e86d13d1597dd88b6d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:51:56.913068 systemd[1]: Created slice kubepods-besteffort-pod67371941_5272_4e0e_84ef_cf7de9065a57.slice - libcontainer container kubepods-besteffort-pod67371941_5272_4e0e_84ef_cf7de9065a57.slice. 
Jan 28 01:51:57.023206 kubelet[2967]: I0128 01:51:57.013904 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tgn4\" (UniqueName: \"kubernetes.io/projected/67371941-5272-4e0e-84ef-cf7de9065a57-kube-api-access-4tgn4\") pod \"calico-kube-controllers-849fc56f8-v9sqx\" (UID: \"67371941-5272-4e0e-84ef-cf7de9065a57\") " pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:51:57.023206 kubelet[2967]: I0128 01:51:57.013964 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67371941-5272-4e0e-84ef-cf7de9065a57-tigera-ca-bundle\") pod \"calico-kube-controllers-849fc56f8-v9sqx\" (UID: \"67371941-5272-4e0e-84ef-cf7de9065a57\") " pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:51:57.129429 kubelet[2967]: I0128 01:51:57.128191 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0da3871e-a4b1-42ab-9e6b-d2183806355d-config-volume\") pod \"coredns-674b8bbfcf-h25bw\" (UID: \"0da3871e-a4b1-42ab-9e6b-d2183806355d\") " pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:51:57.129429 kubelet[2967]: I0128 01:51:57.128366 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9fkv\" (UniqueName: \"kubernetes.io/projected/0da3871e-a4b1-42ab-9e6b-d2183806355d-kube-api-access-z9fkv\") pod \"coredns-674b8bbfcf-h25bw\" (UID: \"0da3871e-a4b1-42ab-9e6b-d2183806355d\") " pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:51:57.204182 systemd[1]: Created slice kubepods-burstable-pod0da3871e_a4b1_42ab_9e6b_d2183806355d.slice - libcontainer container kubepods-burstable-pod0da3871e_a4b1_42ab_9e6b_d2183806355d.slice. 
Jan 28 01:51:57.392110 kubelet[2967]: I0128 01:51:57.384366 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgqq\" (UniqueName: \"kubernetes.io/projected/95f14950-b00b-4ddf-81a4-ed49d84ddcff-kube-api-access-qxgqq\") pod \"coredns-674b8bbfcf-gcgtc\" (UID: \"95f14950-b00b-4ddf-81a4-ed49d84ddcff\") " pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:51:57.406148 kubelet[2967]: I0128 01:51:57.393383 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95f14950-b00b-4ddf-81a4-ed49d84ddcff-config-volume\") pod \"coredns-674b8bbfcf-gcgtc\" (UID: \"95f14950-b00b-4ddf-81a4-ed49d84ddcff\") " pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:51:57.517444 kubelet[2967]: I0128 01:51:57.494851 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ef171ed-8146-4d6a-9063-eb31677aa1d4-calico-apiserver-certs\") pod \"calico-apiserver-654b4ddbfd-mgclm\" (UID: \"3ef171ed-8146-4d6a-9063-eb31677aa1d4\") " pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:51:57.517444 kubelet[2967]: I0128 01:51:57.494981 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkwn\" (UniqueName: \"kubernetes.io/projected/ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9-kube-api-access-jjkwn\") pod \"calico-apiserver-654b4ddbfd-mbn64\" (UID: \"ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9\") " pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:51:57.517444 kubelet[2967]: I0128 01:51:57.495013 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rq4b\" (UniqueName: \"kubernetes.io/projected/3ef171ed-8146-4d6a-9063-eb31677aa1d4-kube-api-access-2rq4b\") pod 
\"calico-apiserver-654b4ddbfd-mgclm\" (UID: \"3ef171ed-8146-4d6a-9063-eb31677aa1d4\") " pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:51:57.517444 kubelet[2967]: I0128 01:51:57.495037 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9-calico-apiserver-certs\") pod \"calico-apiserver-654b4ddbfd-mbn64\" (UID: \"ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9\") " pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:51:57.537003 systemd[1]: Created slice kubepods-burstable-pod95f14950_b00b_4ddf_81a4_ed49d84ddcff.slice - libcontainer container kubepods-burstable-pod95f14950_b00b_4ddf_81a4_ed49d84ddcff.slice. Jan 28 01:51:57.662798 systemd[1]: Created slice kubepods-besteffort-pod3ef171ed_8146_4d6a_9063_eb31677aa1d4.slice - libcontainer container kubepods-besteffort-pod3ef171ed_8146_4d6a_9063_eb31677aa1d4.slice. Jan 28 01:51:57.708313 kubelet[2967]: I0128 01:51:57.708214 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-ca-bundle\") pod \"whisker-5f4986c4f8-cxtwp\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:51:57.714546 kubelet[2967]: I0128 01:51:57.714413 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkzbl\" (UniqueName: \"kubernetes.io/projected/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-kube-api-access-rkzbl\") pod \"whisker-5f4986c4f8-cxtwp\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:51:57.715984 kubelet[2967]: I0128 01:51:57.715890 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-backend-key-pair\") pod \"whisker-5f4986c4f8-cxtwp\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:51:57.830215 kubelet[2967]: I0128 01:51:57.829614 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be8a6b52-634d-45dc-a492-0c042b64c6df-goldmane-key-pair\") pod \"goldmane-666569f655-nv2sz\" (UID: \"be8a6b52-634d-45dc-a492-0c042b64c6df\") " pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:51:57.830215 kubelet[2967]: I0128 01:51:57.829799 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be8a6b52-634d-45dc-a492-0c042b64c6df-goldmane-ca-bundle\") pod \"goldmane-666569f655-nv2sz\" (UID: \"be8a6b52-634d-45dc-a492-0c042b64c6df\") " pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:51:57.830215 kubelet[2967]: I0128 01:51:57.829842 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8a6b52-634d-45dc-a492-0c042b64c6df-config\") pod \"goldmane-666569f655-nv2sz\" (UID: \"be8a6b52-634d-45dc-a492-0c042b64c6df\") " pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:51:57.830215 kubelet[2967]: I0128 01:51:57.830046 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48wh\" (UniqueName: \"kubernetes.io/projected/be8a6b52-634d-45dc-a492-0c042b64c6df-kube-api-access-w48wh\") pod \"goldmane-666569f655-nv2sz\" (UID: \"be8a6b52-634d-45dc-a492-0c042b64c6df\") " pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:51:57.863943 kubelet[2967]: E0128 01:51:57.858836 2967 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:57.894288 containerd[1609]: time="2026-01-28T01:51:57.888305622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}" Jan 28 01:51:57.910031 kubelet[2967]: E0128 01:51:57.909850 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:51:58.125000 containerd[1609]: time="2026-01-28T01:51:58.117819119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,}" Jan 28 01:51:58.213354 systemd[1]: Created slice kubepods-besteffort-pod7c0bf93b_f071_4ad6_aeca_bf378e20fc97.slice - libcontainer container kubepods-besteffort-pod7c0bf93b_f071_4ad6_aeca_bf378e20fc97.slice. 
Jan 28 01:51:58.336144 containerd[1609]: time="2026-01-28T01:51:58.336096640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}" Jan 28 01:51:58.395560 containerd[1609]: time="2026-01-28T01:51:58.390810436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:51:58.432980 containerd[1609]: time="2026-01-28T01:51:58.432875289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,}" Jan 28 01:51:58.540954 systemd[1]: Created slice kubepods-besteffort-podae5a1f75_fd39_4d6a_a16f_43b6b8db37e9.slice - libcontainer container kubepods-besteffort-podae5a1f75_fd39_4d6a_a16f_43b6b8db37e9.slice. Jan 28 01:51:58.769450 containerd[1609]: time="2026-01-28T01:51:58.763873586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:51:58.792876 systemd[1]: Created slice kubepods-besteffort-podbe8a6b52_634d_45dc_a492_0c042b64c6df.slice - libcontainer container kubepods-besteffort-podbe8a6b52_634d_45dc_a492_0c042b64c6df.slice. 
Jan 28 01:51:58.835712 containerd[1609]: time="2026-01-28T01:51:58.833085960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:00.212872 containerd[1609]: time="2026-01-28T01:52:00.212461433Z" level=error msg="Failed to destroy network for sandbox \"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.236886 systemd[1]: run-netns-cni\x2dacefc458\x2d3298\x2d45bf\x2d9951\x2d17f1b4a51110.mount: Deactivated successfully. Jan 28 01:52:00.329567 containerd[1609]: time="2026-01-28T01:52:00.329466021Z" level=error msg="Failed to destroy network for sandbox \"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.352130 systemd[1]: run-netns-cni\x2d5342b6bc\x2dd43a\x2ded56\x2d1dcc\x2da03f85149619.mount: Deactivated successfully. Jan 28 01:52:00.495944 containerd[1609]: time="2026-01-28T01:52:00.494910771Z" level=error msg="Failed to destroy network for sandbox \"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.508139 systemd[1]: run-netns-cni\x2d1cb73432\x2d326e\x2d61dc\x2d3070\x2dfda1dbeca167.mount: Deactivated successfully. 
Jan 28 01:52:00.520603 containerd[1609]: time="2026-01-28T01:52:00.516347901Z" level=error msg="Failed to destroy network for sandbox \"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.520603 containerd[1609]: time="2026-01-28T01:52:00.519817559Z" level=error msg="Failed to destroy network for sandbox \"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.589035 containerd[1609]: time="2026-01-28T01:52:00.588193795Z" level=error msg="Failed to destroy network for sandbox \"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.590874 systemd[1]: run-netns-cni\x2dc890b1cb\x2ddcce\x2d9165\x2db245\x2d9e80ca32c9e3.mount: Deactivated successfully. Jan 28 01:52:00.591098 systemd[1]: run-netns-cni\x2d0b7728b2\x2df5f4\x2d8cab\x2d6c1c\x2deb449593ddb2.mount: Deactivated successfully. Jan 28 01:52:00.612611 systemd[1]: run-netns-cni\x2dc57770a9\x2d2af0\x2de1c1\x2d0af6\x2dd94d6b774ff9.mount: Deactivated successfully. 
Jan 28 01:52:00.671782 containerd[1609]: time="2026-01-28T01:52:00.671285039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.674634 kubelet[2967]: E0128 01:52:00.674586 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.678805 kubelet[2967]: E0128 01:52:00.675786 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:00.678805 kubelet[2967]: E0128 01:52:00.675833 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:00.680731 kubelet[2967]: E0128 01:52:00.679010 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720aedd222066267939ecc06b2432cc8ea24bf9dff5ddf819f55bcc662f7ba6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:52:00.726766 containerd[1609]: time="2026-01-28T01:52:00.725943680Z" level=error msg="Failed to destroy network for sandbox \"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.783475 containerd[1609]: time="2026-01-28T01:52:00.775448430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.783475 containerd[1609]: time="2026-01-28T01:52:00.780514115Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.783874 kubelet[2967]: E0128 01:52:00.780888 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.783874 kubelet[2967]: E0128 01:52:00.780958 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:00.783874 kubelet[2967]: E0128 01:52:00.780984 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:00.784016 kubelet[2967]: E0128 01:52:00.781041 2967 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f4e203ecdabbe706cd7c622830986e617be5a7955f88dde8a1a8acfba074cbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gcgtc" podUID="95f14950-b00b-4ddf-81a4-ed49d84ddcff" Jan 28 01:52:00.784016 kubelet[2967]: E0128 01:52:00.783013 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.791735 containerd[1609]: time="2026-01-28T01:52:00.788731092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.810629 containerd[1609]: time="2026-01-28T01:52:00.802606575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.815298 kubelet[2967]: E0128 01:52:00.811099 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.815298 kubelet[2967]: E0128 01:52:00.811196 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:00.815298 kubelet[2967]: E0128 01:52:00.813355 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:00.815530 kubelet[2967]: E0128 01:52:00.813423 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd0b4e8051c2fd6331f1e3c5b7259008a1091fc5dca7aa10ff83343432863f57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:52:00.815530 kubelet[2967]: E0128 01:52:00.813865 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.815530 kubelet[2967]: E0128 01:52:00.813900 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:00.815815 kubelet[2967]: E0128 01:52:00.813922 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:00.815815 kubelet[2967]: E0128 01:52:00.813960 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28f4cd961c47fcd98fc9e7cfaed4b9c580329b6a6c47893a53daedee45d7cfa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:52:00.815815 kubelet[2967]: E0128 01:52:00.814312 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:00.816004 kubelet[2967]: E0128 01:52:00.814342 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:00.816004 kubelet[2967]: 
E0128 01:52:00.814380 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72ab1a709879a23b4a7a1ce270d292c81e38a9cb9e3af02fc6f9d05fe66f55aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h25bw" podUID="0da3871e-a4b1-42ab-9e6b-d2183806355d" Jan 28 01:52:00.831569 containerd[1609]: time="2026-01-28T01:52:00.829054965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.832534 kubelet[2967]: E0128 01:52:00.829472 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.832534 kubelet[2967]: E0128 01:52:00.829538 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:00.832534 kubelet[2967]: E0128 01:52:00.829562 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:00.834479 kubelet[2967]: E0128 01:52:00.829627 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45bdcc590251af5a63b87d787a5fd9130c5ad06f49f9ddafa42bb02513bec3b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4986c4f8-cxtwp" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" Jan 28 01:52:00.896445 containerd[1609]: time="2026-01-28T01:52:00.894409830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.902357 kubelet[2967]: E0128 01:52:00.901630 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:00.902357 kubelet[2967]: E0128 01:52:00.901928 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:00.902357 kubelet[2967]: E0128 01:52:00.902058 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:00.902595 kubelet[2967]: E0128 01:52:00.902478 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"12070668e1b80ce6885573adf13057b5a0334a79baaf570ceefd858ea735dd70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:52:01.273648 systemd[1]: run-netns-cni\x2d34053e77\x2d1863\x2df3ae\x2dc71e\x2d65f24ca9c3a0.mount: Deactivated successfully. Jan 28 01:52:02.261167 containerd[1609]: time="2026-01-28T01:52:02.258867885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:03.033789 containerd[1609]: time="2026-01-28T01:52:03.017435880Z" level=error msg="Failed to destroy network for sandbox \"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:03.023970 systemd[1]: run-netns-cni\x2d67d531ed\x2d84eb\x2d279d\x2d5064\x2d519dddc7a80d.mount: Deactivated successfully. 
Jan 28 01:52:03.087470 containerd[1609]: time="2026-01-28T01:52:03.084295941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:03.088229 kubelet[2967]: E0128 01:52:03.084538 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:03.088229 kubelet[2967]: E0128 01:52:03.084606 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:03.088229 kubelet[2967]: E0128 01:52:03.084640 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" 
Jan 28 01:52:03.088886 kubelet[2967]: E0128 01:52:03.084850 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed76de3503f3a34fb9fa06cf68d9393aabe84b4e73dcc1f63d4556fb49487699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:52:12.245312 containerd[1609]: time="2026-01-28T01:52:12.237173035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:12.263252 containerd[1609]: time="2026-01-28T01:52:12.263207269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:13.188730 kubelet[2967]: E0128 01:52:13.185924 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:13.219235 containerd[1609]: time="2026-01-28T01:52:13.219173161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:13.254957 containerd[1609]: time="2026-01-28T01:52:13.235156990Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:13.425498 containerd[1609]: time="2026-01-28T01:52:13.425313366Z" level=error msg="Failed to destroy network for sandbox \"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.431288 systemd[1]: run-netns-cni\x2d6572a3bf\x2d1954\x2db5f0\x2df0b5\x2dedb0f15df248.mount: Deactivated successfully. Jan 28 01:52:13.514567 containerd[1609]: time="2026-01-28T01:52:13.513866597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.514885 kubelet[2967]: E0128 01:52:13.514130 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.514885 kubelet[2967]: E0128 01:52:13.514200 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:13.514885 kubelet[2967]: E0128 01:52:13.514227 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:13.515054 kubelet[2967]: E0128 01:52:13.514287 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73c8bd7877b7ad44ec42ea2aec4d60cfa383cd6b0faedda50514ecf9996d5ce5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:52:13.752869 containerd[1609]: time="2026-01-28T01:52:13.741756951Z" level=error msg="Failed to destroy network for sandbox \"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.782373 systemd[1]: 
run-netns-cni\x2d9f410944\x2d8e4c\x2da5fe\x2d4ba2\x2dd8af8d50a73a.mount: Deactivated successfully. Jan 28 01:52:13.833249 containerd[1609]: time="2026-01-28T01:52:13.831309167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.834266 kubelet[2967]: E0128 01:52:13.833759 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.834266 kubelet[2967]: E0128 01:52:13.833834 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:13.834266 kubelet[2967]: E0128 01:52:13.833863 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:13.838564 kubelet[2967]: E0128 01:52:13.833964 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"243bf35983133a04ca93fc6230dfaaf72e0edb856a83c09226be4a44781e0da2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:52:13.936308 containerd[1609]: time="2026-01-28T01:52:13.931337529Z" level=error msg="Failed to destroy network for sandbox \"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:13.995318 systemd[1]: run-netns-cni\x2d20fbb86d\x2d113a\x2df239\x2d2fa2\x2dc2c702fad923.mount: Deactivated successfully. 
Jan 28 01:52:14.089018 containerd[1609]: time="2026-01-28T01:52:14.083422926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:14.124232 kubelet[2967]: E0128 01:52:14.084052 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:14.124232 kubelet[2967]: E0128 01:52:14.084168 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:14.124232 kubelet[2967]: E0128 01:52:14.084194 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:14.125184 kubelet[2967]: E0128 01:52:14.084257 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dacffb2ea13f74e29232004c16ac60be94b53339e1fa625aa4172b1702904a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gcgtc" podUID="95f14950-b00b-4ddf-81a4-ed49d84ddcff" Jan 28 01:52:14.318752 containerd[1609]: time="2026-01-28T01:52:14.311017972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:14.683723 containerd[1609]: time="2026-01-28T01:52:14.679897528Z" level=error msg="Failed to destroy network for sandbox \"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:14.688247 systemd[1]: run-netns-cni\x2dc2df87fb\x2dbfd5\x2dc246\x2d54c2\x2dc946ac63ae22.mount: Deactivated successfully. 
Jan 28 01:52:14.783743 containerd[1609]: time="2026-01-28T01:52:14.782212202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:14.784029 kubelet[2967]: E0128 01:52:14.783090 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:14.784029 kubelet[2967]: E0128 01:52:14.783185 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:14.784029 kubelet[2967]: E0128 01:52:14.783220 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:14.784838 kubelet[2967]: E0128 01:52:14.783290 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86cd85b28f648a1dba74c0a882173324ad235865bef6b9ba19422ba0a434729d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4986c4f8-cxtwp" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" Jan 28 01:52:15.200351 kubelet[2967]: E0128 01:52:15.199252 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:15.200553 containerd[1609]: time="2026-01-28T01:52:15.199263575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:15.235179 kubelet[2967]: E0128 01:52:15.233352 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:15.237585 containerd[1609]: time="2026-01-28T01:52:15.236510276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:15.433731 containerd[1609]: time="2026-01-28T01:52:15.431103405Z" level=error msg="Failed to destroy network for sandbox 
\"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:15.476043 systemd[1]: run-netns-cni\x2db718e364\x2dbde1\x2d5fcd\x2d77ee\x2d7893a3e061c6.mount: Deactivated successfully. Jan 28 01:52:15.541910 containerd[1609]: time="2026-01-28T01:52:15.541723014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:15.559943 kubelet[2967]: E0128 01:52:15.556463 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:15.559943 kubelet[2967]: E0128 01:52:15.556549 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:15.559943 kubelet[2967]: E0128 01:52:15.556591 2967 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:15.560256 kubelet[2967]: E0128 01:52:15.556661 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35df0080481b00cbfc150166a9ea096c604d42dfa9710d8d8a698a55dfcbe490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:52:16.089629 containerd[1609]: time="2026-01-28T01:52:16.089461035Z" level=error msg="Failed to destroy network for sandbox \"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.132585 systemd[1]: run-netns-cni\x2ddc505486\x2d1aa6\x2dea27\x2df1d2\x2dd786caa2903c.mount: Deactivated successfully. 
Jan 28 01:52:16.201750 containerd[1609]: time="2026-01-28T01:52:16.193274643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:16.219793 containerd[1609]: time="2026-01-28T01:52:16.219335400Z" level=error msg="Failed to destroy network for sandbox \"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.240932 systemd[1]: run-netns-cni\x2d78203f40\x2ddc8d\x2d2cbc\x2d2099\x2df14aab750190.mount: Deactivated successfully. Jan 28 01:52:16.264835 containerd[1609]: time="2026-01-28T01:52:16.262211949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.277861 kubelet[2967]: E0128 01:52:16.268655 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.277861 kubelet[2967]: E0128 01:52:16.272247 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:16.277861 kubelet[2967]: E0128 01:52:16.272279 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:16.282258 kubelet[2967]: E0128 01:52:16.272358 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e20ceb17f7b07b08c23d78ed0715bb908a4d4a7fc266f5f3d7e6694326520def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:52:16.331547 containerd[1609]: time="2026-01-28T01:52:16.330789905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.336197 kubelet[2967]: E0128 01:52:16.333599 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:16.336197 kubelet[2967]: E0128 01:52:16.335620 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:16.336197 kubelet[2967]: E0128 01:52:16.335768 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:16.336447 kubelet[2967]: E0128 01:52:16.336032 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67fa2625fcdbb44c335ac276e77e9fc4b9a869250642ed8393ce62f50cb5ab42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h25bw" podUID="0da3871e-a4b1-42ab-9e6b-d2183806355d" Jan 28 01:52:17.037197 containerd[1609]: time="2026-01-28T01:52:17.035936816Z" level=error msg="Failed to destroy network for sandbox \"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:17.053481 systemd[1]: run-netns-cni\x2d5d3fc15c\x2df942\x2de49c\x2dee13\x2de9264d508ff1.mount: Deactivated successfully. 
Jan 28 01:52:17.114163 containerd[1609]: time="2026-01-28T01:52:17.114005985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:17.114564 kubelet[2967]: E0128 01:52:17.114451 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:17.114769 kubelet[2967]: E0128 01:52:17.114573 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:17.114769 kubelet[2967]: E0128 01:52:17.114607 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:17.116791 kubelet[2967]: E0128 01:52:17.115503 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c1fdf7268e2db734400525ca3b63ea77ef49a20a5d9264e65698fc955bec15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:52:22.189659 kubelet[2967]: E0128 01:52:22.186620 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:24.191924 kubelet[2967]: E0128 01:52:24.191581 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:26.332999 containerd[1609]: time="2026-01-28T01:52:26.325992430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:27.145442 containerd[1609]: time="2026-01-28T01:52:27.145250550Z" level=error msg="Failed to destroy network for sandbox \"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:27.167897 
systemd[1]: run-netns-cni\x2d30e6f46c\x2d3f13\x2d9bfb\x2df4e8\x2da333851730dd.mount: Deactivated successfully. Jan 28 01:52:27.187133 containerd[1609]: time="2026-01-28T01:52:27.186979007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:27.188962 kubelet[2967]: E0128 01:52:27.187613 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:27.188962 kubelet[2967]: E0128 01:52:27.187802 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:27.188962 kubelet[2967]: E0128 01:52:27.187834 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:27.189543 kubelet[2967]: E0128 01:52:27.187893 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db89c83012848ba2f016d03f266b36c99adfca622039ca3a530e22ee563cec4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:52:27.213220 containerd[1609]: time="2026-01-28T01:52:27.211282957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:28.023413 containerd[1609]: time="2026-01-28T01:52:28.020057072Z" level=error msg="Failed to destroy network for sandbox \"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:28.053045 systemd[1]: run-netns-cni\x2db17a9b90\x2d41a0\x2d485d\x2d5841\x2d2ef1f6c81aeb.mount: Deactivated successfully. 
Jan 28 01:52:28.156185 containerd[1609]: time="2026-01-28T01:52:28.155851850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:28.289812 kubelet[2967]: E0128 01:52:28.282763 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:28.289812 kubelet[2967]: E0128 01:52:28.283494 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:28.289812 kubelet[2967]: E0128 01:52:28.283548 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" 
Jan 28 01:52:28.293168 kubelet[2967]: E0128 01:52:28.283881 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddfd056b0c8af2fe820e82a7ad4f463cc65208d4be412fb86562f351470aa1e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:52:28.588962 containerd[1609]: time="2026-01-28T01:52:28.583576397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:28.600602 containerd[1609]: time="2026-01-28T01:52:28.590895266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:29.189181 kubelet[2967]: E0128 01:52:29.187621 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:29.194125 containerd[1609]: time="2026-01-28T01:52:29.192891023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:29.363297 containerd[1609]: time="2026-01-28T01:52:29.363093497Z" level=error msg="Failed to destroy network for sandbox \"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.377080 systemd[1]: run-netns-cni\x2d0fa57b23\x2dbbed\x2dac87\x2d78ff\x2dbb26383bd42e.mount: Deactivated successfully. Jan 28 01:52:29.401886 containerd[1609]: time="2026-01-28T01:52:29.398168615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.402195 kubelet[2967]: E0128 01:52:29.400501 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.402195 kubelet[2967]: E0128 01:52:29.401234 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:29.402195 kubelet[2967]: E0128 01:52:29.401391 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:29.407662 kubelet[2967]: E0128 01:52:29.405606 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07058369587dd6f17ed24a33d1c1e493d40e1c7a5c094d080e0ae5283761f1e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:52:29.702979 containerd[1609]: time="2026-01-28T01:52:29.698279743Z" level=error msg="Failed to destroy network for sandbox \"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.745558 systemd[1]: run-netns-cni\x2d714d4e54\x2d2afc\x2d1104\x2d0a0f\x2dcdaac15bf0c4.mount: Deactivated successfully. 
Jan 28 01:52:29.788068 containerd[1609]: time="2026-01-28T01:52:29.787867483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.805094 kubelet[2967]: E0128 01:52:29.796799 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.805094 kubelet[2967]: E0128 01:52:29.798791 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:29.805094 kubelet[2967]: E0128 01:52:29.799053 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:29.806747 kubelet[2967]: E0128 01:52:29.799614 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b3edeb017b0281f39545efdd7acae92fe863fb53f55b24908e1a76b37ee291e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:52:29.934365 containerd[1609]: time="2026-01-28T01:52:29.934107165Z" level=error msg="Failed to destroy network for sandbox \"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:29.943246 systemd[1]: run-netns-cni\x2d9ff498c7\x2d0c87\x2dd1d0\x2d0e5f\x2d56b566759ed7.mount: Deactivated successfully. 
Jan 28 01:52:30.019996 containerd[1609]: time="2026-01-28T01:52:30.015193867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.020273 kubelet[2967]: E0128 01:52:30.015831 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.020273 kubelet[2967]: E0128 01:52:30.015950 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:30.020273 kubelet[2967]: E0128 01:52:30.016075 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-gcgtc" Jan 28 01:52:30.020416 kubelet[2967]: E0128 01:52:30.016148 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fac91cc1fd64065237477862ca3838bcfb43e63468882544ec8d70f362a2157f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gcgtc" podUID="95f14950-b00b-4ddf-81a4-ed49d84ddcff" Jan 28 01:52:30.207392 kubelet[2967]: E0128 01:52:30.204402 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:30.222954 containerd[1609]: time="2026-01-28T01:52:30.222551238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:30.226658 containerd[1609]: time="2026-01-28T01:52:30.226590355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:30.231002 containerd[1609]: time="2026-01-28T01:52:30.230966754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:30.802786 containerd[1609]: time="2026-01-28T01:52:30.801172671Z" level=error msg="Failed to destroy network for sandbox 
\"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.815127 systemd[1]: run-netns-cni\x2d7f351dd4\x2d88f2\x2d6659\x2dd2ce\x2d3e0b23e5cd95.mount: Deactivated successfully. Jan 28 01:52:30.835552 containerd[1609]: time="2026-01-28T01:52:30.831175958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.835877 kubelet[2967]: E0128 01:52:30.831455 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.835877 kubelet[2967]: E0128 01:52:30.831545 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:30.835877 kubelet[2967]: E0128 01:52:30.831580 2967 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:30.842503 kubelet[2967]: E0128 01:52:30.831640 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2460cf89f33b6ae3d2f989882a1adf8ba35f2e45810013c10cfe811844959309\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4986c4f8-cxtwp" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" Jan 28 01:52:30.894488 containerd[1609]: time="2026-01-28T01:52:30.890088716Z" level=error msg="Failed to destroy network for sandbox \"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.908155 systemd[1]: run-netns-cni\x2d51ce20f7\x2d55fc\x2d18c6\x2df603\x2dbe852c625c31.mount: Deactivated successfully. 
Jan 28 01:52:30.927444 containerd[1609]: time="2026-01-28T01:52:30.927298495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.929661 kubelet[2967]: E0128 01:52:30.929517 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.929855 kubelet[2967]: E0128 01:52:30.929748 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:30.929855 kubelet[2967]: E0128 01:52:30.929781 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:30.930022 kubelet[2967]: E0128 01:52:30.929841 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d5ad8a3a0baccf0161cfa97885c1dd05b24708028ed437ea9e6ef189a8405c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:52:30.941139 containerd[1609]: time="2026-01-28T01:52:30.940879247Z" level=error msg="Failed to destroy network for sandbox \"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:30.969140 systemd[1]: run-netns-cni\x2df4aa50ca\x2d208e\x2de2a7\x2d4b8b\x2df188e968edc6.mount: Deactivated successfully. 
Jan 28 01:52:31.004368 containerd[1609]: time="2026-01-28T01:52:31.004256466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:31.004619 kubelet[2967]: E0128 01:52:31.004567 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:31.004789 kubelet[2967]: E0128 01:52:31.004634 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:31.004844 kubelet[2967]: E0128 01:52:31.004796 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:31.011196 kubelet[2967]: E0128 01:52:31.010805 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"749cd7085b2b56f789c181468f65994a24152a89e6d0fb0d3a0ba8bd08923389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h25bw" podUID="0da3871e-a4b1-42ab-9e6b-d2183806355d" Jan 28 01:52:31.191564 kubelet[2967]: E0128 01:52:31.189219 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:36.268284 kernel: audit: type=1130 audit(1769565156.248:613): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:52:36.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:52:36.249099 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:33718.service - OpenSSH per-connection server daemon (10.0.0.1:33718). 
Jan 28 01:52:37.095000 audit[4655]: USER_ACCT pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.104307 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 33718 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:52:37.116488 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:52:37.110000 audit[4655]: CRED_ACQ pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.142077 kernel: audit: type=1101 audit(1769565157.095:614): pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.142207 kernel: audit: type=1103 audit(1769565157.110:615): pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.142251 kernel: audit: type=1006 audit(1769565157.110:616): pid=4655 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Jan 28 01:52:37.156071 kernel: audit: type=1300 audit(1769565157.110:616): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5a3430f0 a2=3 a3=0 items=0 ppid=1 pid=4655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:52:37.110000 audit[4655]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5a3430f0 a2=3 a3=0 items=0 ppid=1 pid=4655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:52:37.164052 systemd-logind[1586]: New session 9 of user core.
Jan 28 01:52:37.170132 kernel: audit: type=1327 audit(1769565157.110:616): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:52:37.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:52:37.196530 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 28 01:52:37.219000 audit[4655]: USER_START pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.312241 kernel: audit: type=1105 audit(1769565157.219:617): pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.312460 kernel: audit: type=1103 audit(1769565157.233:618): pid=4659 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:37.233000 audit[4659]: CRED_ACQ pid=4659 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:38.090238 sshd[4659]: Connection closed by 10.0.0.1 port 33718
Jan 28 01:52:38.094000 audit[4655]: USER_END pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:38.091312 sshd-session[4655]: pam_unix(sshd:session): session closed for user core
Jan 28 01:52:38.169643 kernel: audit: type=1106 audit(1769565158.094:619): pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:38.170519 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:33718.service: Deactivated successfully.
Jan 28 01:52:38.094000 audit[4655]: CRED_DISP pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:38.238118 kernel: audit: type=1104 audit(1769565158.094:620): pid=4655 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:38.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:52:38.318481 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 01:52:38.334162 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit.
Jan 28 01:52:38.398578 systemd-logind[1586]: Removed session 9.
Jan 28 01:52:39.194955 containerd[1609]: time="2026-01-28T01:52:39.193993318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:52:40.275199 containerd[1609]: time="2026-01-28T01:52:40.273864610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 01:52:40.317218 containerd[1609]: time="2026-01-28T01:52:40.317037573Z" level=error msg="Failed to destroy network for sandbox \"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.333956 systemd[1]: run-netns-cni\x2d7abe2814\x2d80ba\x2d6e9e\x2d9ffc\x2de3e71b4b7f18.mount: Deactivated successfully.
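An aside on the `run-netns-cni\x2d…` mount units above: systemd escapes characters such as `-` inside unit-name path components as `\xNN` hex sequences (here `\x2d` is `-`), so the CNI network-namespace IDs look mangled in the log. A minimal sketch for decoding them back (the helper name `systemd_unescape` is mine; `systemd-escape --unescape` does the same job on a host with systemd):

```python
import re

def systemd_unescape(unit_name: str) -> str:
    """Decode systemd \\xNN escapes (e.g. \\x2d -> '-') in a unit name."""
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  unit_name)

# One of the mount units from the log above:
print(systemd_unescape(r"run-netns-cni\x2d7abe2814\x2d80ba\x2d6e9e\x2d9ffc\x2de3e71b4b7f18.mount"))
# -> run-netns-cni-7abe2814-80ba-6e9e-9ffc-e3e71b4b7f18.mount
```

Note this only inverts the `\xNN` encoding; full systemd escaping also maps `/` to `-`, which cannot be inverted without more context.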
Jan 28 01:52:40.379203 containerd[1609]: time="2026-01-28T01:52:40.371956445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.394144 kubelet[2967]: E0128 01:52:40.388011 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.394144 kubelet[2967]: E0128 01:52:40.388111 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm"
Jan 28 01:52:40.394144 kubelet[2967]: E0128 01:52:40.388200 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm"
Jan 28 01:52:40.394934 kubelet[2967]: E0128 01:52:40.388299 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cce72a2ff59064b2628a4b7a890581a1adbbdc6b4afcf8012d040a1a1c32d3d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:52:40.898962 containerd[1609]: time="2026-01-28T01:52:40.898381214Z" level=error msg="Failed to destroy network for sandbox \"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.935591 containerd[1609]: time="2026-01-28T01:52:40.925027707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.925454 systemd[1]: run-netns-cni\x2dcf7ad663\x2d3c6e\x2dfb83\x2d26ef\x2da231bd97568f.mount: Deactivated successfully.
Jan 28 01:52:40.939930 kubelet[2967]: E0128 01:52:40.937193 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:40.939930 kubelet[2967]: E0128 01:52:40.937291 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64"
Jan 28 01:52:40.939930 kubelet[2967]: E0128 01:52:40.937325 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64"
Jan 28 01:52:40.940136 kubelet[2967]: E0128 01:52:40.937394 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce0dc706b4e2c936fe62f13dafa6dc7b570308d2ae1258f4f0d7222616e3c57c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:52:42.253305 kubelet[2967]: E0128 01:52:42.252588 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:52:42.279300 containerd[1609]: time="2026-01-28T01:52:42.274365412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}"
Jan 28 01:52:42.285245 containerd[1609]: time="2026-01-28T01:52:42.285198674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,}"
Jan 28 01:52:43.276046 kubelet[2967]: E0128 01:52:43.259649 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:52:43.274520 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:55404.service - OpenSSH per-connection server daemon (10.0.0.1:55404).
Jan 28 01:52:43.279389 containerd[1609]: time="2026-01-28T01:52:43.265981624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}"
Jan 28 01:52:43.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:52:43.296942 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:52:43.297097 kernel: audit: type=1130 audit(1769565163.274:622): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:52:44.002271 containerd[1609]: time="2026-01-28T01:52:44.001824157Z" level=error msg="Failed to destroy network for sandbox \"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.020425 systemd[1]: run-netns-cni\x2d5c683c39\x2da071\x2d3438\x2d2c5d\x2d9a877b9dce35.mount: Deactivated successfully.
Jan 28 01:52:44.101000 audit[4775]: USER_ACCT pid=4775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.124908 kernel: audit: type=1101 audit(1769565164.101:623): pid=4775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.124991 sshd[4775]: Accepted publickey for core from 10.0.0.1 port 55404 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:52:44.135000 audit[4775]: CRED_ACQ pid=4775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.137508 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:52:44.174236 containerd[1609]: time="2026-01-28T01:52:44.171450798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.174484 kubelet[2967]: E0128 01:52:44.172503 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.174484 kubelet[2967]: E0128 01:52:44.172651 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc"
Jan 28 01:52:44.174484 kubelet[2967]: E0128 01:52:44.172972 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcgtc"
Jan 28 01:52:44.174862 kubelet[2967]: E0128 01:52:44.173494 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gcgtc_kube-system(95f14950-b00b-4ddf-81a4-ed49d84ddcff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8f90cc221cf32458be8bc1d232034459791a72e9f05d154c4fedc189c636edf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gcgtc" podUID="95f14950-b00b-4ddf-81a4-ed49d84ddcff"
Jan 28 01:52:44.207604 kernel: audit: type=1103 audit(1769565164.135:624): pid=4775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.208314 kernel: audit: type=1006 audit(1769565164.135:625): pid=4775 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Jan 28 01:52:44.135000 audit[4775]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6c181ca0 a2=3 a3=0 items=0 ppid=1 pid=4775 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:52:44.223402 containerd[1609]: time="2026-01-28T01:52:44.223273092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}"
Jan 28 01:52:44.229360 containerd[1609]: time="2026-01-28T01:52:44.229207534Z" level=error msg="Failed to destroy network for sandbox \"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.135000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:52:44.270232 kernel: audit: type=1300 audit(1769565164.135:625): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6c181ca0 a2=3 a3=0 items=0 ppid=1 pid=4775 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:52:44.270955 kernel: audit: type=1327 audit(1769565164.135:625): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:52:44.272905 systemd-logind[1586]: New session 10 of user core.
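The audit PROCTITLE records above (kernel echo type=1327) carry the process title hex-encoded, with NUL bytes separating argv words; `ausearch -i` would decode them on the host. A minimal standalone sketch (the helper name `decode_proctitle` is mine):

```python
def decode_proctitle(hex_title: str) -> str:
    """Audit PROCTITLE values are hex-encoded; NUL bytes separate argv words."""
    return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode("utf-8", "replace")

# The proctitle value recorded for both SSH sessions in this log:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```

So sessions 9 and 10 belong to the privileged monitor process of OpenSSH's `sshd-session` for user `core` (auid=500; the auid=4294967295 seen earlier is simply the "unset" login UID, i.e. -1 as an unsigned 32-bit value).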
Jan 28 01:52:44.297932 containerd[1609]: time="2026-01-28T01:52:44.294035809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.298565 kubelet[2967]: E0128 01:52:44.296396 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.298565 kubelet[2967]: E0128 01:52:44.296566 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md"
Jan 28 01:52:44.298565 kubelet[2967]: E0128 01:52:44.296745 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md"
Jan 28 01:52:44.299283 kubelet[2967]: E0128 01:52:44.296928 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73fdd184e37e65545cf91160604e2c3647eabd09eeb517676061680b463eb44e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:52:44.299043 systemd[1]: run-netns-cni\x2df81a5e80\x2d04aa\x2d2e67\x2d8686\x2d88cf3355c00a.mount: Deactivated successfully.
Jan 28 01:52:44.401178 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 28 01:52:44.420000 audit[4775]: USER_START pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.459000 audit[4857]: CRED_ACQ pid=4857 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.570853 kernel: audit: type=1105 audit(1769565164.420:626): pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.570962 kernel: audit: type=1103 audit(1769565164.459:627): pid=4857 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:44.570999 containerd[1609]: time="2026-01-28T01:52:44.563015112Z" level=error msg="Failed to destroy network for sandbox \"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.583408 systemd[1]: run-netns-cni\x2d0abd9da3\x2d2a17\x2d2d15\x2dc141\x2da13f5c32e25c.mount: Deactivated successfully.
Jan 28 01:52:44.600063 containerd[1609]: time="2026-01-28T01:52:44.599075823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.600302 kubelet[2967]: E0128 01:52:44.599535 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.600302 kubelet[2967]: E0128 01:52:44.599602 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw"
Jan 28 01:52:44.600302 kubelet[2967]: E0128 01:52:44.599633 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw"
Jan 28 01:52:44.603758 kubelet[2967]: E0128 01:52:44.603597 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h25bw" podUID="0da3871e-a4b1-42ab-9e6b-d2183806355d"
Jan 28 01:52:44.927937 containerd[1609]: time="2026-01-28T01:52:44.925368800Z" level=error msg="Failed to destroy network for sandbox \"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.938182 systemd[1]: run-netns-cni\x2d7a03ec40\x2d5b72\x2d4a63\x2dfab1\x2deee2676e5124.mount: Deactivated successfully.
Jan 28 01:52:44.987935 containerd[1609]: time="2026-01-28T01:52:44.987012085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.996096 kubelet[2967]: E0128 01:52:44.987484 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:44.996096 kubelet[2967]: E0128 01:52:44.988003 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx"
Jan 28 01:52:44.996096 kubelet[2967]: E0128 01:52:44.988047 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx"
Jan 28 01:52:44.996285 kubelet[2967]: E0128 01:52:44.988227 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe6351f8876de86a1f799f18031f544f5ce15bf82e9ec63eef152bcbf2f6df4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:52:45.198144 containerd[1609]: time="2026-01-28T01:52:45.192049123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,}"
Jan 28 01:52:45.363440 sshd[4857]: Connection closed by 10.0.0.1 port 55404
Jan 28 01:52:45.366997 sshd-session[4775]: pam_unix(sshd:session): session closed for user core
Jan 28 01:52:45.376000 audit[4775]: USER_END pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:45.442832 kernel: audit: type=1106 audit(1769565165.376:628): pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:45.379000 audit[4775]: CRED_DISP pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:45.464266 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:55404.service: Deactivated successfully.
Jan 28 01:52:45.499484 kernel: audit: type=1104 audit(1769565165.379:629): pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:52:45.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:55404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:52:45.502360 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit.
Jan 28 01:52:45.542442 systemd[1]: session-10.scope: Deactivated successfully.
Jan 28 01:52:45.556112 systemd-logind[1586]: Removed session 10.
Jan 28 01:52:46.154948 containerd[1609]: time="2026-01-28T01:52:46.151242246Z" level=error msg="Failed to destroy network for sandbox \"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:52:46.161463 systemd[1]: run-netns-cni\x2d4f95d129\x2dea64\x2d62a3\x2dc4e2\x2d8e40da4d05c9.mount: Deactivated successfully.
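Every sandbox in this stretch fails with the same CNI error: `stat /var/lib/calico/nodename: no such file or directory`. That file is written by the calico/node container when it starts, so the plugin's own hint in the message points at the likely root cause: calico/node is not running (or not healthy) on this node, and every pod on it will keep failing sandbox creation until it is. For triage it helps to collapse the repetitive kubelet errors into (pod, sandbox ID) pairs; a minimal sketch assuming the `kuberuntime_sandbox.go` line format seen above (the helper name `failed_sandboxes` is mine):

```python
import re

# Sandbox IDs are 64-hex-char strings, quoted (possibly with escaped quotes) in the err field;
# kubelet appends a trailing pod="namespace/name" field to these lines.
SANDBOX_RE = re.compile(r'sandbox \\?"([0-9a-f]{64})\\?"')
POD_RE = re.compile(r'pod="([^"]+)"')

def failed_sandboxes(lines):
    """Yield (pod, sandbox-id) for each sandbox-creation failure line."""
    for line in lines:
        sandbox = SANDBOX_RE.search(line)
        pod = POD_RE.search(line)
        if sandbox and pod:
            yield pod.group(1), sandbox.group(1)

# One kubelet line from the log above, abbreviated:
log = ['kubelet[2967]: E0128 01:52:44.599602 2967 kuberuntime_sandbox.go:70] '
       '"Failed to create sandbox for pod" err="... sandbox '
       '\\"b4de5cd7f21c91d1c0f45ff773ef5b217356dc576c9feae10126b60864791594\\": '
       '..." pod="kube-system/coredns-674b8bbfcf-h25bw"']
print(list(failed_sandboxes(log)))
```

Since the underlying cause is one and the same, checking the calico/node DaemonSet pod on this node (and its mount of /var/lib/calico/) would be the next step rather than chasing the per-pod errors.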
Jan 28 01:52:46.202065 containerd[1609]: time="2026-01-28T01:52:46.192264814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:46.268091 containerd[1609]: time="2026-01-28T01:52:46.261022497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:46.269153 kubelet[2967]: E0128 01:52:46.262943 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:46.269153 kubelet[2967]: E0128 01:52:46.263136 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:46.269153 kubelet[2967]: E0128 01:52:46.263167 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:46.272981 kubelet[2967]: E0128 01:52:46.264606 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad99fa6ffe2eeb7640940f444ec5411465cca283b9f6906fa58da450f850de5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4986c4f8-cxtwp" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" Jan 28 01:52:46.839259 containerd[1609]: time="2026-01-28T01:52:46.836919776Z" level=error msg="Failed to destroy network for sandbox \"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:46.938122 containerd[1609]: time="2026-01-28T01:52:46.875918652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:46.939849 kubelet[2967]: E0128 01:52:46.878598 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:46.939849 kubelet[2967]: E0128 01:52:46.879365 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:46.939849 kubelet[2967]: E0128 01:52:46.879555 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv2sz" Jan 28 01:52:46.869143 systemd[1]: run-netns-cni\x2d087b764b\x2d76c3\x2d8373\x2dae74\x2d40c9887986bf.mount: Deactivated successfully. 
Jan 28 01:52:46.974215 kubelet[2967]: E0128 01:52:46.880406 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec0be3574ae23ab75aa5855c09e97eab4e596b8eb21f814896bd7eec509d47ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:52:50.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:55420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:52:50.510136 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:55420.service - OpenSSH per-connection server daemon (10.0.0.1:55420). Jan 28 01:52:50.517496 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:52:50.517856 kernel: audit: type=1130 audit(1769565170.508:631): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:55420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:52:51.125000 audit[4958]: USER_ACCT pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.158562 kernel: audit: type=1101 audit(1769565171.125:632): pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.168561 kernel: audit: type=1103 audit(1769565171.164:633): pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.164000 audit[4958]: CRED_ACQ pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.169505 sshd[4958]: Accepted publickey for core from 10.0.0.1 port 55420 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:52:51.217269 kernel: audit: type=1006 audit(1769565171.164:634): pid=4958 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 01:52:51.217516 kernel: audit: type=1300 audit(1769565171.164:634): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc64ba04a0 a2=3 a3=0 items=0 ppid=1 pid=4958 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:51.164000 audit[4958]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc64ba04a0 a2=3 a3=0 items=0 ppid=1 pid=4958 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:51.179362 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:52:51.164000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:52:51.249808 kernel: audit: type=1327 audit(1769565171.164:634): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:52:51.261175 systemd-logind[1586]: New session 11 of user core. Jan 28 01:52:51.298905 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:52:51.339000 audit[4958]: USER_START pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.425915 kernel: audit: type=1105 audit(1769565171.339:635): pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.368000 audit[4962]: CRED_ACQ pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:51.519128 kernel: audit: type=1103 audit(1769565171.368:636): pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:53.003157 sshd[4962]: Connection closed by 10.0.0.1 port 55420 Jan 28 01:52:53.005304 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Jan 28 01:52:53.030000 audit[4958]: USER_END pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:53.052113 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:52:53.056325 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:55420.service: Deactivated successfully. Jan 28 01:52:53.072043 kernel: audit: type=1106 audit(1769565173.030:637): pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:53.031000 audit[4958]: CRED_DISP pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:53.083936 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:52:53.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:55420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:52:53.138315 kernel: audit: type=1104 audit(1769565173.031:638): pid=4958 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:53.130447 systemd-logind[1586]: Removed session 11. Jan 28 01:52:53.236271 containerd[1609]: time="2026-01-28T01:52:53.233183953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:54.791873 containerd[1609]: time="2026-01-28T01:52:54.791436390Z" level=error msg="Failed to destroy network for sandbox \"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:54.814558 systemd[1]: run-netns-cni\x2d868aa191\x2d5ea4\x2ddeb9\x2da38d\x2d51b09659e8b6.mount: Deactivated successfully. 
Jan 28 01:52:54.878931 containerd[1609]: time="2026-01-28T01:52:54.878851379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:54.882532 kubelet[2967]: E0128 01:52:54.882051 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:54.882532 kubelet[2967]: E0128 01:52:54.882142 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:54.882532 kubelet[2967]: E0128 01:52:54.882171 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" Jan 28 01:52:54.883572 kubelet[2967]: E0128 01:52:54.882835 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec093fd771034a8b18acf7f9ee2f9b51e5673405e63c7168efe00eb580a15853\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:52:55.215076 containerd[1609]: time="2026-01-28T01:52:55.214199989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:55.217628 containerd[1609]: time="2026-01-28T01:52:55.215464495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:52:55.336021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650998988.mount: Deactivated successfully. 
Jan 28 01:52:55.554368 containerd[1609]: time="2026-01-28T01:52:55.553661902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:52:55.562347 containerd[1609]: time="2026-01-28T01:52:55.560912570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 01:52:55.563127 containerd[1609]: time="2026-01-28T01:52:55.563039174Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:52:55.600254 containerd[1609]: time="2026-01-28T01:52:55.600194601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:52:55.603187 containerd[1609]: time="2026-01-28T01:52:55.602966392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m5.914979105s" Jan 28 01:52:55.603187 containerd[1609]: time="2026-01-28T01:52:55.603048714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:52:55.609882 containerd[1609]: time="2026-01-28T01:52:55.609611263Z" level=error msg="Failed to destroy network for sandbox \"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 28 01:52:55.629032 systemd[1]: run-netns-cni\x2de4a07444\x2d7788\x2d0286\x2d4f37\x2d55e3755325f0.mount: Deactivated successfully. Jan 28 01:52:55.642083 containerd[1609]: time="2026-01-28T01:52:55.638914372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:55.642329 kubelet[2967]: E0128 01:52:55.640827 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:55.642329 kubelet[2967]: E0128 01:52:55.641653 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:55.643266 kubelet[2967]: E0128 01:52:55.642652 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" Jan 28 01:52:55.654908 kubelet[2967]: E0128 01:52:55.651568 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3446a610d86fd043adfe63d88b4029c420ef1baf5e2be44ec354c0eb902542be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:52:55.687761 containerd[1609]: time="2026-01-28T01:52:55.687583147Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:52:55.751384 containerd[1609]: time="2026-01-28T01:52:55.751308333Z" level=error msg="Failed to destroy network for sandbox \"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:55.752192 containerd[1609]: time="2026-01-28T01:52:55.751319746Z" level=info msg="Container cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:52:55.791602 containerd[1609]: time="2026-01-28T01:52:55.789024887Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:55.799956 kubelet[2967]: E0128 01:52:55.793654 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:55.799956 kubelet[2967]: E0128 01:52:55.796597 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:55.799956 kubelet[2967]: E0128 01:52:55.797000 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9md" Jan 28 01:52:55.800222 kubelet[2967]: E0128 01:52:55.797437 2967 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d923e810fd5261e62dc7a46cbe4cb40d20445a4e6917892d83ee9dd00ec3482f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:52:55.814326 systemd[1]: run-netns-cni\x2d8e8ee1a5\x2d950f\x2d48b9\x2dbcfe\x2d7e001e6a9a61.mount: Deactivated successfully. Jan 28 01:52:55.925272 containerd[1609]: time="2026-01-28T01:52:55.922933240Z" level=info msg="CreateContainer within sandbox \"03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14\"" Jan 28 01:52:55.928993 containerd[1609]: time="2026-01-28T01:52:55.927339109Z" level=info msg="StartContainer for \"cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14\"" Jan 28 01:52:55.958764 containerd[1609]: time="2026-01-28T01:52:55.958624503Z" level=info msg="connecting to shim cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14" address="unix:///run/containerd/s/f9a591991b1f8683c4081b2a7c46b7155d5ae3c18b3032a72e2c0d88fbcf5b12" protocol=ttrpc version=3 Jan 28 01:52:56.234068 containerd[1609]: time="2026-01-28T01:52:56.229917751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:56.441633 systemd[1]: Started 
cri-containerd-cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14.scope - libcontainer container cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14. Jan 28 01:52:56.897000 audit: BPF prog-id=183 op=LOAD Jan 28 01:52:56.904887 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:52:56.905013 kernel: audit: type=1334 audit(1769565176.897:640): prog-id=183 op=LOAD Jan 28 01:52:56.897000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:56.971830 kernel: audit: type=1300 audit(1769565176.897:640): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:56.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:57.013927 kernel: audit: type=1327 audit(1769565176.897:640): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:57.014060 kernel: audit: type=1334 audit(1769565176.907:641): prog-id=184 op=LOAD Jan 28 01:52:56.907000 audit: BPF prog-id=184 op=LOAD Jan 28 01:52:57.014233 containerd[1609]: time="2026-01-28T01:52:56.992048218Z" level=error msg="Failed to destroy network 
for sandbox \"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:56.907000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:57.010436 systemd[1]: run-netns-cni\x2d47f07c3d\x2d1041\x2dce73\x2de789\x2d7935fe441af3.mount: Deactivated successfully. Jan 28 01:52:57.045128 kernel: audit: type=1300 audit(1769565176.907:641): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:57.045287 kernel: audit: type=1327 audit(1769565176.907:641): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:56.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:57.088897 kernel: audit: type=1334 audit(1769565176.907:642): prog-id=184 op=UNLOAD Jan 28 01:52:56.907000 audit: BPF prog-id=184 op=UNLOAD Jan 28 01:52:57.097077 containerd[1609]: time="2026-01-28T01:52:57.096872756Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:56.907000 audit[5079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:57.100455 kubelet[2967]: E0128 01:52:57.100034 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:57.101968 kubelet[2967]: E0128 01:52:57.101062 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:57.101968 kubelet[2967]: E0128 01:52:57.101860 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" Jan 28 01:52:57.102832 kubelet[2967]: E0128 01:52:57.102620 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"331f903a0eab56dc2c7b3e17848b24538924a4882ba7455ec68f728101609e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:52:57.140565 kernel: audit: type=1300 audit(1769565176.907:642): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:56.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:57.195898 kernel: audit: type=1327 audit(1769565176.907:642): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:56.907000 audit: BPF prog-id=183 op=UNLOAD Jan 28 01:52:57.203919 kubelet[2967]: E0128 01:52:57.196285 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:56.907000 audit[5079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:56.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:56.907000 audit: BPF prog-id=185 op=LOAD Jan 28 01:52:57.204927 kernel: audit: type=1334 audit(1769565176.907:643): prog-id=183 op=UNLOAD Jan 28 01:52:56.907000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3665 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:56.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653164373533646334316261346635616263376564663734623332 Jan 28 01:52:57.210110 containerd[1609]: 
time="2026-01-28T01:52:57.209582021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:57.359615 containerd[1609]: time="2026-01-28T01:52:57.359397633Z" level=info msg="StartContainer for \"cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14\" returns successfully" Jan 28 01:52:57.650344 containerd[1609]: time="2026-01-28T01:52:57.650087163Z" level=error msg="Failed to destroy network for sandbox \"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:57.661099 systemd[1]: run-netns-cni\x2d04a7b097\x2d1839\x2dfa60\x2d316b\x2da5b56d6c48b3.mount: Deactivated successfully. Jan 28 01:52:57.675510 containerd[1609]: time="2026-01-28T01:52:57.674974583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:57.679344 kubelet[2967]: E0128 01:52:57.676447 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:57.679344 kubelet[2967]: E0128 
01:52:57.676527 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:57.679344 kubelet[2967]: E0128 01:52:57.676559 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h25bw" Jan 28 01:52:57.679550 kubelet[2967]: E0128 01:52:57.676631 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h25bw_kube-system(0da3871e-a4b1-42ab-9e6b-d2183806355d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8c5263617bed8764cb6a5db0b83af258166156dca8f06792f1bfa1f4718b6e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h25bw" podUID="0da3871e-a4b1-42ab-9e6b-d2183806355d" Jan 28 01:52:58.167160 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:52:58.167317 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jan 28 01:52:58.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:57786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:52:58.167494 kubelet[2967]: E0128 01:52:58.160869 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:58.161446 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). Jan 28 01:52:58.353999 containerd[1609]: time="2026-01-28T01:52:58.353925990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:58.583662 kubelet[2967]: I0128 01:52:58.564178 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wkj6h" podStartSLOduration=9.458268888 podStartE2EDuration="1m48.564155553s" podCreationTimestamp="2026-01-28 01:51:10 +0000 UTC" firstStartedPulling="2026-01-28 01:51:16.500196152 +0000 UTC m=+163.105107821" lastFinishedPulling="2026-01-28 01:52:55.606082817 +0000 UTC m=+262.210994486" observedRunningTime="2026-01-28 01:52:58.541605281 +0000 UTC m=+265.146516970" watchObservedRunningTime="2026-01-28 01:52:58.564155553 +0000 UTC m=+265.169067221" Jan 28 01:52:58.824823 containerd[1609]: time="2026-01-28T01:52:58.824620787Z" level=error msg="Failed to destroy network for sandbox \"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:58.830103 systemd[1]: run-netns-cni\x2d0a6fe5b4\x2dd3b7\x2dfdcd\x2dd43f\x2d01612aa4a276.mount: 
Deactivated successfully. Jan 28 01:52:58.838852 containerd[1609]: time="2026-01-28T01:52:58.838018153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4986c4f8-cxtwp,Uid:7c0bf93b-f071-4ad6-aeca-bf378e20fc97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:58.844004 kubelet[2967]: E0128 01:52:58.843948 2967 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:52:58.843000 audit[5185]: USER_ACCT pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:58.845925 kubelet[2967]: E0128 01:52:58.844635 2967 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:58.846000 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 
28 01:52:58.846000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:58.846000 audit[5185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7e845ce0 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:52:58.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:52:58.850466 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:52:58.854831 kubelet[2967]: E0128 01:52:58.854011 2967 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4986c4f8-cxtwp" Jan 28 01:52:58.855048 kubelet[2967]: E0128 01:52:58.854817 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4986c4f8-cxtwp_calico-system(7c0bf93b-f071-4ad6-aeca-bf378e20fc97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f42089a7eceea373334355d12a566b8476c21958102c8c47ea10143f0829ce95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4986c4f8-cxtwp" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" Jan 28 01:52:58.877031 systemd-logind[1586]: New session 12 of user core. Jan 28 01:52:58.891893 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:52:58.901000 audit[5185]: USER_START pid=5185 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:58.910000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:59.178867 kubelet[2967]: E0128 01:52:59.167097 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:59.204135 containerd[1609]: time="2026-01-28T01:52:59.203662296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,}" Jan 28 01:52:59.214991 kubelet[2967]: E0128 01:52:59.209322 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:52:59.215180 containerd[1609]: time="2026-01-28T01:52:59.209792147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,}" Jan 28 01:52:59.510171 kubelet[2967]: I0128 01:52:59.509934 2967 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-ca-bundle\") pod \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " Jan 28 01:52:59.512529 kubelet[2967]: I0128 01:52:59.510460 2967 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-backend-key-pair\") pod \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " Jan 28 01:52:59.512529 kubelet[2967]: I0128 01:52:59.510494 2967 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkzbl\" (UniqueName: \"kubernetes.io/projected/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-kube-api-access-rkzbl\") pod \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\" (UID: \"7c0bf93b-f071-4ad6-aeca-bf378e20fc97\") " Jan 28 01:52:59.524365 kubelet[2967]: I0128 01:52:59.513346 2967 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7c0bf93b-f071-4ad6-aeca-bf378e20fc97" (UID: "7c0bf93b-f071-4ad6-aeca-bf378e20fc97"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:52:59.571309 sshd[5245]: Connection closed by 10.0.0.1 port 57786 Jan 28 01:52:59.570526 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jan 28 01:52:59.577254 systemd[1]: var-lib-kubelet-pods-7c0bf93b\x2df071\x2d4ad6\x2daeca\x2dbf378e20fc97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drkzbl.mount: Deactivated successfully. 
Jan 28 01:52:59.580000 audit[5185]: USER_END pid=5185 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:59.580000 audit[5185]: CRED_DISP pid=5185 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:52:59.592998 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:57786.service: Deactivated successfully. Jan 28 01:52:59.604931 kubelet[2967]: I0128 01:52:59.601050 2967 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-kube-api-access-rkzbl" (OuterVolumeSpecName: "kube-api-access-rkzbl") pod "7c0bf93b-f071-4ad6-aeca-bf378e20fc97" (UID: "7c0bf93b-f071-4ad6-aeca-bf378e20fc97"). InnerVolumeSpecName "kube-api-access-rkzbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:52:59.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:57786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:52:59.638122 kubelet[2967]: I0128 01:52:59.625104 2967 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:52:59.638122 kubelet[2967]: I0128 01:52:59.625153 2967 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rkzbl\" (UniqueName: \"kubernetes.io/projected/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-kube-api-access-rkzbl\") on node \"localhost\" DevicePath \"\"" Jan 28 01:52:59.647654 kubelet[2967]: I0128 01:52:59.642973 2967 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7c0bf93b-f071-4ad6-aeca-bf378e20fc97" (UID: "7c0bf93b-f071-4ad6-aeca-bf378e20fc97"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:52:59.649593 systemd[1]: var-lib-kubelet-pods-7c0bf93b\x2df071\x2d4ad6\x2daeca\x2dbf378e20fc97-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:52:59.652603 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:52:59.699430 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:52:59.711904 systemd-logind[1586]: Removed session 12. 
Jan 28 01:52:59.725896 kubelet[2967]: I0128 01:52:59.725622 2967 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c0bf93b-f071-4ad6-aeca-bf378e20fc97-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:53:00.198463 kubelet[2967]: E0128 01:53:00.176488 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:00.319973 systemd[1]: Removed slice kubepods-besteffort-pod7c0bf93b_f071_4ad6_aeca_bf378e20fc97.slice - libcontainer container kubepods-besteffort-pod7c0bf93b_f071_4ad6_aeca_bf378e20fc97.slice. Jan 28 01:53:01.052252 systemd[1]: Created slice kubepods-besteffort-podf9057416_92cd_485c_b269_9b046834d5f3.slice - libcontainer container kubepods-besteffort-podf9057416_92cd_485c_b269_9b046834d5f3.slice. Jan 28 01:53:01.108342 kubelet[2967]: I0128 01:53:01.108242 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9057416-92cd-485c-b269-9b046834d5f3-whisker-backend-key-pair\") pod \"whisker-7fb5cb5d8-9zmvs\" (UID: \"f9057416-92cd-485c-b269-9b046834d5f3\") " pod="calico-system/whisker-7fb5cb5d8-9zmvs" Jan 28 01:53:01.108624 kubelet[2967]: I0128 01:53:01.108365 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9qq\" (UniqueName: \"kubernetes.io/projected/f9057416-92cd-485c-b269-9b046834d5f3-kube-api-access-2z9qq\") pod \"whisker-7fb5cb5d8-9zmvs\" (UID: \"f9057416-92cd-485c-b269-9b046834d5f3\") " pod="calico-system/whisker-7fb5cb5d8-9zmvs" Jan 28 01:53:01.108624 kubelet[2967]: I0128 01:53:01.108405 2967 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f9057416-92cd-485c-b269-9b046834d5f3-whisker-ca-bundle\") pod \"whisker-7fb5cb5d8-9zmvs\" (UID: \"f9057416-92cd-485c-b269-9b046834d5f3\") " pod="calico-system/whisker-7fb5cb5d8-9zmvs" Jan 28 01:53:01.386849 containerd[1609]: time="2026-01-28T01:53:01.386541771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb5cb5d8-9zmvs,Uid:f9057416-92cd-485c-b269-9b046834d5f3,Namespace:calico-system,Attempt:0,}" Jan 28 01:53:02.216792 kubelet[2967]: I0128 01:53:02.213280 2967 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0bf93b-f071-4ad6-aeca-bf378e20fc97" path="/var/lib/kubelet/pods/7c0bf93b-f071-4ad6-aeca-bf378e20fc97/volumes" Jan 28 01:53:02.717151 systemd-networkd[1515]: cali5d618e96467: Link UP Jan 28 01:53:02.717586 systemd-networkd[1515]: cali5d618e96467: Gained carrier Jan 28 01:53:02.917906 containerd[1609]: 2026-01-28 01:52:59.650 [INFO][5258] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:53:02.917906 containerd[1609]: 2026-01-28 01:53:00.029 [INFO][5258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0 coredns-674b8bbfcf- kube-system 95f14950-b00b-4ddf-81a4-ed49d84ddcff 1265 0 2026-01-28 01:48:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gcgtc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d618e96467 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-" Jan 28 01:53:02.917906 containerd[1609]: 2026-01-28 01:53:00.029 [INFO][5258] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.917906 containerd[1609]: 2026-01-28 01:53:01.062 [INFO][5330] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" HandleID="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Workload="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.065 [INFO][5330] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" HandleID="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Workload="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004eb220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gcgtc", "timestamp":"2026-01-28 01:53:01.062527637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.065 [INFO][5330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.066 [INFO][5330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.067 [INFO][5330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.153 [INFO][5330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.327 [INFO][5330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.428 [INFO][5330] ipam/ipam.go 543: Ran out of existing affine blocks for host host="localhost" Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.477 [INFO][5330] ipam/ipam.go 560: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.561 [INFO][5330] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.566 [INFO][5330] ipam/ipam.go 572: Found unclaimed block host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.929383 containerd[1609]: 2026-01-28 01:53:01.566 [INFO][5330] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.642 [INFO][5330] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.678 [INFO][5330] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.678 [INFO][5330] ipam/ipam_block_reader_writer.go 198: Marking existing affinity with current state pending as pending 
host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.765 [INFO][5330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.783 [INFO][5330] ipam/ipam.go 208: Affinity has not been confirmed - attempt to confirm it cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:02.935024 containerd[1609]: 2026-01-28 01:53:01.826 [ERROR][5330] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1551", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"pending", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.832 [WARNING][5330] ipam/ipam.go 212: Error marking affinity as pending as part of confirmation process cidr=192.168.88.128/26 error=update conflict: BlockAffinity(localhost-192-168-88-128-26) host="localhost" Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.833 [INFO][5330] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Jan 
28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.887 [INFO][5330] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.931 [INFO][5330] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.943 [INFO][5330] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:01.954 [INFO][5330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:02.068 [INFO][5330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:02.068 [INFO][5330] ipam/ipam.go 607: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:02.068 [INFO][5330] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:02.101 [INFO][5330] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a Jan 28 01:53:02.935443 containerd[1609]: 2026-01-28 01:53:02.146 [INFO][5330] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.186 [ERROR][5330] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1552", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc000540640), Allocations:[]*int{(*int)(0xc00039aab0), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0004eb220), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gcgtc", "timestamp":"2026-01-28 01:53:01.062527637 +0000 UTC"}}}, SequenceNumber:0x188ec22839d43416, SequenceNumberForAllocation:map[string]uint64{"0":0x188ec22839d43415}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.186 [INFO][5330] ipam/ipam.go 1250: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.363 [INFO][5330] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 
01:53:02.372 [INFO][5330] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.414 [INFO][5330] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.467 [INFO][5330] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.467 [INFO][5330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" host="localhost" Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.467 [INFO][5330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:53:02.937997 containerd[1609]: 2026-01-28 01:53:02.467 [INFO][5330] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" HandleID="k8s-pod-network.1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Workload="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.479 [INFO][5258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95f14950-b00b-4ddf-81a4-ed49d84ddcff", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 48, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gcgtc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d618e96467", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.480 [INFO][5258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.480 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d618e96467 ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.734 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.735 [INFO][5258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95f14950-b00b-4ddf-81a4-ed49d84ddcff", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 48, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a", Pod:"coredns-674b8bbfcf-gcgtc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d618e96467", MAC:"fa:ca:6e:d9:97:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:02.938525 containerd[1609]: 2026-01-28 01:53:02.893 [INFO][5258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcgtc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gcgtc-eth0" Jan 28 01:53:03.029642 systemd-networkd[1515]: cali1122c355f02: Link UP Jan 28 01:53:03.147313 systemd-networkd[1515]: cali1122c355f02: Gained carrier Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:52:59.900 [INFO][5263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:00.029 [INFO][5263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nv2sz-eth0 goldmane-666569f655- calico-system be8a6b52-634d-45dc-a492-0c042b64c6df 1268 0 2026-01-28 01:49:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nv2sz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1122c355f02 [] [] }} ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:00.029 [INFO][5263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:01.071 [INFO][5329] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" 
HandleID="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Workload="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:01.072 [INFO][5329] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" HandleID="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Workload="localhost-k8s-goldmane--666569f655--nv2sz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nv2sz", "timestamp":"2026-01-28 01:53:01.07155012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:01.072 [INFO][5329] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.469 [INFO][5329] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.471 [INFO][5329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.506 [INFO][5329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.532 [INFO][5329] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.606 [INFO][5329] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.627 [INFO][5329] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.723 [INFO][5329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.724 [INFO][5329] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.764 [INFO][5329] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44 Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.858 [INFO][5329] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.994 [INFO][5329] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.994 [INFO][5329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" host="localhost" Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.994 [INFO][5329] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:53:03.404139 containerd[1609]: 2026-01-28 01:53:02.994 [INFO][5329] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" HandleID="k8s-pod-network.79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Workload="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.007 [INFO][5263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nv2sz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"be8a6b52-634d-45dc-a492-0c042b64c6df", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nv2sz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1122c355f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.007 [INFO][5263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.007 [INFO][5263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1122c355f02 ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.133 [INFO][5263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.159 [INFO][5263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nv2sz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"be8a6b52-634d-45dc-a492-0c042b64c6df", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44", Pod:"goldmane-666569f655-nv2sz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1122c355f02", MAC:"0a:0b:bf:90:a2:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:03.432078 containerd[1609]: 2026-01-28 01:53:03.362 [INFO][5263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" Namespace="calico-system" Pod="goldmane-666569f655-nv2sz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nv2sz-eth0" Jan 28 01:53:03.495863 containerd[1609]: time="2026-01-28T01:53:03.278572216Z" level=info msg="container event 
discarded" container=c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1 type=CONTAINER_CREATED_EVENT Jan 28 01:53:03.495863 containerd[1609]: time="2026-01-28T01:53:03.495794083Z" level=info msg="container event discarded" container=c11ac79f860d05ec46ec4b4c9e6a1ed1b5ab811103337da6e739ad25cccf8ab1 type=CONTAINER_STARTED_EVENT Jan 28 01:53:03.776805 containerd[1609]: time="2026-01-28T01:53:03.776747502Z" level=info msg="container event discarded" container=caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689 type=CONTAINER_CREATED_EVENT Jan 28 01:53:03.777614 containerd[1609]: time="2026-01-28T01:53:03.777286871Z" level=info msg="container event discarded" container=caca198c12616520c9d93ddabe0acd10628c004609f8622a0c2830fe8a8b0689 type=CONTAINER_STARTED_EVENT Jan 28 01:53:03.827309 containerd[1609]: time="2026-01-28T01:53:03.827238292Z" level=info msg="container event discarded" container=cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e type=CONTAINER_CREATED_EVENT Jan 28 01:53:03.827599 containerd[1609]: time="2026-01-28T01:53:03.827570659Z" level=info msg="container event discarded" container=cd5552c8c00760aa608d4a148126f9c2f58d42c8737025f8bd979a2bf6fdf17e type=CONTAINER_STARTED_EVENT Jan 28 01:53:03.938460 containerd[1609]: time="2026-01-28T01:53:03.938287392Z" level=info msg="container event discarded" container=d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d type=CONTAINER_CREATED_EVENT Jan 28 01:53:04.104031 containerd[1609]: time="2026-01-28T01:53:04.103327286Z" level=info msg="container event discarded" container=ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8 type=CONTAINER_CREATED_EVENT Jan 28 01:53:04.133511 containerd[1609]: time="2026-01-28T01:53:04.119957710Z" level=info msg="container event discarded" container=2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c type=CONTAINER_CREATED_EVENT Jan 28 01:53:04.273494 systemd-networkd[1515]: cali70a9086c1e5: Link UP Jan 
28 01:53:04.315960 containerd[1609]: time="2026-01-28T01:53:04.315903156Z" level=info msg="connecting to shim 79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44" address="unix:///run/containerd/s/09db8ed042975d7a96ef72fe644fbcc44cac3c33c4128a0bdcc1c55fde18cfbb" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:04.322577 systemd-networkd[1515]: cali70a9086c1e5: Gained carrier Jan 28 01:53:04.469087 systemd-networkd[1515]: cali5d618e96467: Gained IPv6LL Jan 28 01:53:04.525788 containerd[1609]: time="2026-01-28T01:53:04.522849343Z" level=info msg="connecting to shim 1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a" address="unix:///run/containerd/s/6fbf3eade66be73caf5b5dbba77e453d1c7a63f3591ad9f9508214198c6565f1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:04.632944 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Jan 28 01:53:04.646331 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 28 01:53:04.646466 kernel: audit: type=1130 audit(1769565184.626:654): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:38450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:04.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:38450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:01.741 [INFO][5371] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:01.832 [INFO][5371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0 whisker-7fb5cb5d8- calico-system f9057416-92cd-485c-b269-9b046834d5f3 1548 0 2026-01-28 01:53:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fb5cb5d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7fb5cb5d8-9zmvs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali70a9086c1e5 [] [] }} ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:01.837 [INFO][5371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:02.243 [INFO][5387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" HandleID="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Workload="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:02.244 [INFO][5387] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" 
HandleID="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Workload="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002666c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7fb5cb5d8-9zmvs", "timestamp":"2026-01-28 01:53:02.243111955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:02.244 [INFO][5387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:02.994 [INFO][5387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:02.995 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.117 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.286 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.490 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.558 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.641 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.702 
[INFO][5387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.754 [INFO][5387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9 Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:03.862 [INFO][5387] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:04.122 [INFO][5387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:04.128 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" host="localhost" Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:04.130 [INFO][5387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:53:04.814248 containerd[1609]: 2026-01-28 01:53:04.130 [INFO][5387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" HandleID="k8s-pod-network.eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Workload="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.151 [INFO][5371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0", GenerateName:"whisker-7fb5cb5d8-", Namespace:"calico-system", SelfLink:"", UID:"f9057416-92cd-485c-b269-9b046834d5f3", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb5cb5d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7fb5cb5d8-9zmvs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70a9086c1e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.151 [INFO][5371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.151 [INFO][5371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70a9086c1e5 ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.380 [INFO][5371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.381 [INFO][5371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0", GenerateName:"whisker-7fb5cb5d8-", Namespace:"calico-system", SelfLink:"", UID:"f9057416-92cd-485c-b269-9b046834d5f3", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 53, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb5cb5d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9", Pod:"whisker-7fb5cb5d8-9zmvs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70a9086c1e5", MAC:"72:95:c8:e3:e3:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:04.816337 containerd[1609]: 2026-01-28 01:53:04.495 [INFO][5371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" Namespace="calico-system" Pod="whisker-7fb5cb5d8-9zmvs" WorkloadEndpoint="localhost-k8s-whisker--7fb5cb5d8--9zmvs-eth0" Jan 28 01:53:04.912069 systemd-networkd[1515]: cali1122c355f02: Gained IPv6LL Jan 28 01:53:05.187231 systemd[1]: Started cri-containerd-79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44.scope - libcontainer container 79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44. 
Jan 28 01:53:05.228000 audit[5557]: USER_ACCT pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.242246 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:05.255505 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:53:05.283469 systemd-logind[1586]: New session 13 of user core. Jan 28 01:53:05.284117 kernel: audit: type=1101 audit(1769565185.228:655): pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.302129 systemd[1]: Started cri-containerd-1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a.scope - libcontainer container 1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a. Jan 28 01:53:05.351627 kernel: audit: type=1103 audit(1769565185.238:656): pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.238000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.317944 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 01:53:05.386621 kernel: audit: type=1006 audit(1769565185.238:657): pid=5557 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 01:53:05.238000 audit[5557]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffeef17c0 a2=3 a3=0 items=0 ppid=1 pid=5557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.427503 kernel: audit: type=1300 audit(1769565185.238:657): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffeef17c0 a2=3 a3=0 items=0 ppid=1 pid=5557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:05.471223 kernel: audit: type=1327 audit(1769565185.238:657): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:05.478872 containerd[1609]: time="2026-01-28T01:53:05.477628495Z" level=info msg="container event discarded" container=ae3e5280e63e23ec052efca27314281cd077747028554918ed713ea4fbb51fa8 type=CONTAINER_STARTED_EVENT Jan 28 01:53:05.351000 audit[5557]: USER_START pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.533306 containerd[1609]: time="2026-01-28T01:53:05.510254653Z" level=info msg="connecting to shim eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9" address="unix:///run/containerd/s/07c34513750c22877e6bcc0daddee388012937407948394f7b6593256a120456" 
namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:05.534752 kernel: audit: type=1105 audit(1769565185.351:658): pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.372000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.588848 kernel: audit: type=1103 audit(1769565185.372:659): pid=5607 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:05.605881 containerd[1609]: time="2026-01-28T01:53:05.604469894Z" level=info msg="container event discarded" container=d18be323ce7bdfd7fda9d8afdb4921d85d979fadff7d33a4fe2125678c17f85d type=CONTAINER_STARTED_EVENT Jan 28 01:53:05.729000 audit: BPF prog-id=186 op=LOAD Jan 28 01:53:05.762044 kernel: audit: type=1334 audit(1769565185.729:660): prog-id=186 op=LOAD Jan 28 01:53:05.830202 kernel: audit: type=1334 audit(1769565185.789:661): prog-id=187 op=LOAD Jan 28 01:53:05.789000 audit: BPF prog-id=187 op=LOAD Jan 28 01:53:05.789000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.789000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.789000 audit: BPF prog-id=187 op=UNLOAD Jan 28 01:53:05.789000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.795000 audit: BPF prog-id=188 op=LOAD Jan 28 01:53:05.795000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.795000 audit: BPF prog-id=189 op=LOAD Jan 28 01:53:05.795000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:53:05.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.795000 audit: BPF prog-id=189 op=UNLOAD Jan 28 01:53:05.795000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.795000 audit: BPF prog-id=188 op=UNLOAD Jan 28 01:53:05.795000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.795000 audit: BPF prog-id=190 op=LOAD Jan 28 01:53:05.795000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5524 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739616132363235353666323766313265363134356235643435346163 Jan 28 01:53:05.809370 systemd-networkd[1515]: cali70a9086c1e5: Gained IPv6LL Jan 28 01:53:05.841000 audit: BPF prog-id=191 op=LOAD Jan 28 01:53:05.821323 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:05.875000 audit: BPF prog-id=192 op=LOAD Jan 28 01:53:05.875000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.875000 audit: BPF prog-id=192 op=UNLOAD Jan 28 01:53:05.875000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 
01:53:05.881000 audit: BPF prog-id=193 op=LOAD Jan 28 01:53:05.881000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.881000 audit: BPF prog-id=194 op=LOAD Jan 28 01:53:05.881000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.896000 audit: BPF prog-id=194 op=UNLOAD Jan 28 01:53:05.896000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.896000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.896000 audit: BPF prog-id=193 op=UNLOAD Jan 28 01:53:05.896000 audit[5579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.896000 audit: BPF prog-id=195 op=LOAD Jan 28 01:53:05.896000 audit[5579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5548 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:05.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161303661363965643834323435346637623461383639303433316138 Jan 28 01:53:05.957861 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:06.109879 containerd[1609]: time="2026-01-28T01:53:06.091047216Z" level=info msg="container event discarded" container=2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c type=CONTAINER_STARTED_EVENT Jan 
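The audit `PROCTITLE` records above carry each process's command line as hex-encoded bytes with NUL-separated argv. A minimal decoding sketch (the helper name is mine, not part of any audit tool):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_value)
    # NUL bytes separate argv elements; replace them with spaces for display.
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# The short proctitle from the sshd-session audit record above:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```

Applied to the long runc proctitles, this recovers command lines of the form `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...` (truncated in the records themselves).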
28 01:53:06.203361 systemd[1]: Started cri-containerd-eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9.scope - libcontainer container eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9. Jan 28 01:53:06.679000 audit: BPF prog-id=196 op=LOAD Jan 28 01:53:06.692909 sshd[5607]: Connection closed by 10.0.0.1 port 38450 Jan 28 01:53:06.703000 audit: BPF prog-id=197 op=LOAD Jan 28 01:53:06.703000 audit[5649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.703000 audit: BPF prog-id=197 op=UNLOAD Jan 28 01:53:06.703000 audit[5649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.707000 audit: BPF prog-id=198 op=LOAD Jan 28 01:53:06.707000 audit[5649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.715000 audit: BPF prog-id=199 op=LOAD Jan 28 01:53:06.715000 audit[5649]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.718968 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:06.717000 audit: BPF prog-id=199 op=UNLOAD Jan 28 01:53:06.717000 audit[5649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.717000 audit: BPF prog-id=198 op=UNLOAD Jan 28 01:53:06.717000 audit[5649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=14 a1=0 a2=0 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.717000 audit: BPF prog-id=200 op=LOAD Jan 28 01:53:06.717000 audit[5649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5627 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:06.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561653335623933346537346565336665353433643963636238386662 Jan 28 01:53:06.758541 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:06.832000 audit[5557]: USER_END pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:06.835000 audit[5557]: CRED_DISP pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 
28 01:53:06.886225 containerd[1609]: time="2026-01-28T01:53:06.869593993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv2sz,Uid:be8a6b52-634d-45dc-a492-0c042b64c6df,Namespace:calico-system,Attempt:0,} returns sandbox id \"79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44\"" Jan 28 01:53:06.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:38450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:06.886493 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:38450.service: Deactivated successfully. Jan 28 01:53:06.904211 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:53:06.966989 containerd[1609]: time="2026-01-28T01:53:06.951500952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:53:07.003826 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:53:07.030238 systemd-logind[1586]: Removed session 13. 
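The `audit(1769565185.228:655)`-style tokens in the interleaved kernel lines encode `epoch.millis:serial`; converting the epoch confirms they line up with the journal's wall-clock timestamps. A small sketch, with a helper name of my own:

```python
import re
from datetime import datetime, timezone

def audit_time(token: str) -> str:
    """Convert an 'audit(epoch.millis:serial)' token to ISO-8601 UTC."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", token)
    if m is None:
        raise ValueError(f"not an audit token: {token!r}")
    epoch, millis, serial = int(m.group(1)), m.group(2), m.group(3)
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return f"{ts.isoformat()} (+{millis} ms, serial {serial})"

print(audit_time("audit(1769565185.228:655)"))
# -> 2026-01-28T01:53:05+00:00 (+228 ms, serial 655)
```

The result matches the `Jan 28 01:53:05` journal lines surrounding the corresponding `type=1101` record.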
Jan 28 01:53:07.263492 containerd[1609]: time="2026-01-28T01:53:07.233104617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:53:07.305975 containerd[1609]: time="2026-01-28T01:53:07.305384058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcgtc,Uid:95f14950-b00b-4ddf-81a4-ed49d84ddcff,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a\"" Jan 28 01:53:07.332019 kubelet[2967]: E0128 01:53:07.330578 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:07.373545 containerd[1609]: time="2026-01-28T01:53:07.373212723Z" level=info msg="CreateContainer within sandbox \"1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:53:07.395510 containerd[1609]: time="2026-01-28T01:53:07.395461815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb5cb5d8-9zmvs,Uid:f9057416-92cd-485c-b269-9b046834d5f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9\"" Jan 28 01:53:07.438812 containerd[1609]: time="2026-01-28T01:53:07.434124563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:07.491926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339960129.mount: Deactivated successfully. 
Jan 28 01:53:07.493421 kubelet[2967]: E0128 01:53:07.492353 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:53:07.493421 kubelet[2967]: E0128 01:53:07.492410 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:53:07.493558 containerd[1609]: time="2026-01-28T01:53:07.492049210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:53:07.493558 containerd[1609]: time="2026-01-28T01:53:07.492179791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:07.494943 kubelet[2967]: E0128 01:53:07.492995 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:07.589411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147099774.mount: Deactivated successfully. 
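The repeated `ErrImagePull` / `NotFound` entries in this stretch all name the same missing tags. One quick way to pull the distinct failing image references out of kubelet lines like these (regex, function name, and sample lines are my own sketch, not part of kubelet):

```python
import re

IMAGE_RE = re.compile(r'image="([^"]+)"')

def failing_images(log_lines):
    """Collect distinct image refs from kubelet image-pull failure entries."""
    seen = []
    for line in log_lines:
        if "Failed to pull image" in line or "PullImage from image service failed" in line:
            m = IMAGE_RE.search(line)
            if m and m.group(1) not in seen:
                seen.append(m.group(1))
    return seen

# Simplified stand-ins for the kubelet entries above (quoting unescaped):
lines = [
    'kubelet[2967]: E0128 "Failed to pull image" err="..." image="ghcr.io/flatcar/calico/goldmane:v3.30.4"',
    'kubelet[2967]: E0128 "Failed to pull image" err="..." image="ghcr.io/flatcar/calico/whisker:v3.30.4"',
]
print(failing_images(lines))
# -> ['ghcr.io/flatcar/calico/goldmane:v3.30.4', 'ghcr.io/flatcar/calico/whisker:v3.30.4']
```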
Jan 28 01:53:07.611042 kubelet[2967]: E0128 01:53:07.603074 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:53:07.611111 containerd[1609]: time="2026-01-28T01:53:07.531280781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:53:07.611111 containerd[1609]: time="2026-01-28T01:53:07.580486753Z" level=info msg="Container 2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:53:07.728153 containerd[1609]: time="2026-01-28T01:53:07.725307422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:07.774336 containerd[1609]: time="2026-01-28T01:53:07.774266096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:53:07.774768 containerd[1609]: time="2026-01-28T01:53:07.774536107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:07.775253 kubelet[2967]: E0128 01:53:07.775209 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:53:07.783065 kubelet[2967]: E0128 01:53:07.783020 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:53:07.785774 kubelet[2967]: E0128 01:53:07.784414 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:11f2d6a54a3d467fbd60c4526f82d473,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:07.786621 containerd[1609]: time="2026-01-28T01:53:07.786579524Z" level=info msg="CreateContainer within sandbox \"1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57\"" Jan 28 01:53:07.854897 containerd[1609]: time="2026-01-28T01:53:07.850390639Z" level=info msg="StartContainer for \"2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57\"" Jan 28 01:53:07.888428 containerd[1609]: time="2026-01-28T01:53:07.887977198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:53:07.929207 kubelet[2967]: E0128 01:53:07.902833 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:53:08.079555 containerd[1609]: time="2026-01-28T01:53:08.073538025Z" level=info msg="connecting to shim 2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57" address="unix:///run/containerd/s/6fbf3eade66be73caf5b5dbba77e453d1c7a63f3591ad9f9508214198c6565f1" protocol=ttrpc version=3 Jan 28 01:53:08.427306 containerd[1609]: time="2026-01-28T01:53:08.427249300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:08.613064 kubelet[2967]: E0128 01:53:08.600083 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:08.835977 containerd[1609]: time="2026-01-28T01:53:08.835898443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:08.876509 containerd[1609]: time="2026-01-28T01:53:08.857410697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:53:08.876806 kubelet[2967]: E0128 01:53:08.876040 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:53:08.876806 kubelet[2967]: E0128 01:53:08.876099 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:53:08.876806 kubelet[2967]: E0128 01:53:08.876605 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:08.878589 kubelet[2967]: E0128 01:53:08.878009 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:53:08.878420 systemd[1]: Started cri-containerd-2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57.scope - libcontainer container 2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57. 
Jan 28 01:53:08.974000 audit: BPF prog-id=201 op=LOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d8dcc10 a2=98 a3=1fffffffffffffff items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.974000 audit: BPF prog-id=201 op=UNLOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe2d8dcbe0 a3=0 items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.974000 audit: BPF prog-id=202 op=LOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d8dcaf0 a2=94 a3=3 items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.974000 audit: BPF prog-id=202 op=UNLOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2d8dcaf0 a2=94 a3=3 items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.974000 audit: BPF prog-id=203 op=LOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d8dcb30 a2=94 a3=7ffe2d8dcd10 items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.974000 audit: BPF prog-id=203 op=UNLOAD Jan 28 01:53:08.974000 audit[5765]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2d8dcb30 a2=94 a3=7ffe2d8dcd10 items=0 ppid=5445 pid=5765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.974000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:53:08.983000 audit: BPF prog-id=204 op=LOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff43dd7ae0 a2=98 a3=3 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:08.983000 audit: BPF prog-id=204 op=UNLOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff43dd7ab0 a3=0 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:08.983000 audit: BPF prog-id=205 op=LOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff43dd78d0 a2=94 a3=54428f items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:08.983000 audit: BPF prog-id=205 op=UNLOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff43dd78d0 a2=94 a3=54428f items=0 ppid=5445 
pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:08.983000 audit: BPF prog-id=206 op=LOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff43dd7900 a2=94 a3=2 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:08.983000 audit: BPF prog-id=206 op=UNLOAD Jan 28 01:53:08.983000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff43dd7900 a2=0 a3=2 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:08.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:09.110000 audit: BPF prog-id=207 op=LOAD Jan 28 01:53:09.118000 audit: BPF prog-id=208 op=LOAD Jan 28 01:53:09.118000 audit[5738]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 
01:53:09.118000 audit: BPF prog-id=208 op=UNLOAD Jan 28 01:53:09.118000 audit[5738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.119000 audit: BPF prog-id=209 op=LOAD Jan 28 01:53:09.119000 audit[5738]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.121000 audit: BPF prog-id=210 op=LOAD Jan 28 01:53:09.121000 audit[5738]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.121000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.121000 audit: BPF prog-id=210 op=UNLOAD Jan 28 01:53:09.121000 audit[5738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.121000 audit: BPF prog-id=209 op=UNLOAD Jan 28 01:53:09.121000 audit[5738]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.121000 audit: BPF prog-id=211 op=LOAD Jan 28 01:53:09.121000 audit[5738]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=5548 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:53:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262323434326537343932323637316230346438353462333937393436 Jan 28 01:53:09.164000 audit[5772]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:09.164000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe8dc70270 a2=0 a3=7ffe8dc7025c items=0 ppid=3078 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:09.204000 audit[5772]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:09.204000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe8dc70270 a2=0 a3=0 items=0 ppid=3078 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:09.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:09.218831 containerd[1609]: time="2026-01-28T01:53:09.218308216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:53:09.353957 kubelet[2967]: E0128 01:53:09.349776 2967 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:53:09.353957 kubelet[2967]: E0128 01:53:09.352276 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:53:09.778492 containerd[1609]: time="2026-01-28T01:53:09.778289846Z" level=info msg="StartContainer for \"2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57\" returns successfully" Jan 28 01:53:10.278965 containerd[1609]: time="2026-01-28T01:53:10.276311445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,}" Jan 28 01:53:10.278965 containerd[1609]: time="2026-01-28T01:53:10.277231693Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,}" Jan 28 01:53:10.389184 kubelet[2967]: E0128 01:53:10.389143 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:10.399000 audit[5813]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:10.415501 kernel: kauditd_printk_skb: 131 callbacks suppressed Jan 28 01:53:10.416805 kernel: audit: type=1325 audit(1769565190.399:709): table=filter:123 family=2 entries=20 op=nft_register_rule pid=5813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:10.399000 audit[5813]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff94169a10 a2=0 a3=7fff941699fc items=0 ppid=3078 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:10.506143 kernel: audit: type=1300 audit(1769565190.399:709): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff94169a10 a2=0 a3=7fff941699fc items=0 ppid=3078 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:10.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:10.589785 kernel: audit: type=1327 audit(1769565190.399:709): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:10.589980 kernel: audit: type=1325 
audit(1769565190.467:710): table=nat:124 family=2 entries=14 op=nft_register_rule pid=5813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:10.467000 audit[5813]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:10.665142 kernel: audit: type=1300 audit(1769565190.467:710): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff94169a10 a2=0 a3=0 items=0 ppid=3078 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:10.467000 audit[5813]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff94169a10 a2=0 a3=0 items=0 ppid=3078 pid=5813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:10.765305 kubelet[2967]: I0128 01:53:10.764371 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gcgtc" podStartSLOduration=274.764347907 podStartE2EDuration="4m34.764347907s" podCreationTimestamp="2026-01-28 01:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:53:10.678221797 +0000 UTC m=+277.283133476" watchObservedRunningTime="2026-01-28 01:53:10.764347907 +0000 UTC m=+277.369259586" Jan 28 01:53:10.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:10.885779 kernel: audit: type=1327 audit(1769565190.467:710): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:11.027000 
audit[5843]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:11.027000 audit[5843]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcad07d0a0 a2=0 a3=7ffcad07d08c items=0 ppid=3078 pid=5843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.063982 systemd-networkd[1515]: cali655754d6954: Link UP Jan 28 01:53:11.119086 kernel: audit: type=1325 audit(1769565191.027:711): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:11.119221 kernel: audit: type=1300 audit(1769565191.027:711): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcad07d0a0 a2=0 a3=7ffcad07d08c items=0 ppid=3078 pid=5843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.121336 kernel: audit: type=1327 audit(1769565191.027:711): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:11.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:11.131373 systemd-networkd[1515]: cali655754d6954: Gained carrier Jan 28 01:53:11.175354 kernel: audit: type=1325 audit(1769565191.142:712): table=nat:126 family=2 entries=14 op=nft_register_rule pid=5843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:11.142000 audit[5843]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:11.142000 
audit[5843]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcad07d0a0 a2=0 a3=0 items=0 ppid=3078 pid=5843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:08.635 [INFO][5698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0 calico-apiserver-654b4ddbfd- calico-apiserver ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9 1267 0 2026-01-28 01:49:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654b4ddbfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-654b4ddbfd-mbn64 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali655754d6954 [] [] }} ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:08.825 [INFO][5698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.615 [INFO][5764] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" HandleID="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.616 [INFO][5764] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" HandleID="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-654b4ddbfd-mbn64", "timestamp":"2026-01-28 01:53:09.615956179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.616 [INFO][5764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.617 [INFO][5764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
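The audit PROCTITLE fields in the entries above are the process command line hex-encoded, with NUL bytes separating argv entries. A minimal decoder (a hypothetical helper, not part of auditd or any tool in this log) recovers the readable command, shown here on the iptables-restore record logged at 01:53:11.027:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_str)
    # Split on the NUL bytes the kernel uses between argv entries, drop empties.
    return " ".join(p.decode("ascii", errors="replace")
                    for p in raw.split(b"\x00") if p)

hex_title = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(hex_title))
# → iptables-restore -w 5 -W 100000 --noflush --counters
```

The decoded command matches the `comm="iptables-restor"` field (which the kernel truncates to 15 characters) and shows the full restore flags Calico's dataplane agent passes.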
Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.617 [INFO][5764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.812 [INFO][5764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:09.931 [INFO][5764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.193 [INFO][5764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.254 [INFO][5764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.294 [INFO][5764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.294 [INFO][5764] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.319 [INFO][5764] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284 Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.373 [INFO][5764] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.559 [INFO][5764] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.608 [INFO][5764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" host="localhost" Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.623 [INFO][5764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:53:11.357014 containerd[1609]: 2026-01-28 01:53:10.635 [INFO][5764] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" HandleID="k8s-pod-network.a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:10.806 [INFO][5698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0", GenerateName:"calico-apiserver-654b4ddbfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654b4ddbfd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-654b4ddbfd-mbn64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali655754d6954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:10.806 [INFO][5698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:10.806 [INFO][5698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali655754d6954 ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:11.129 [INFO][5698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:11.142 [INFO][5698] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0", GenerateName:"calico-apiserver-654b4ddbfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9", ResourceVersion:"1267", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654b4ddbfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284", Pod:"calico-apiserver-654b4ddbfd-mbn64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali655754d6954", MAC:"de:e7:b5:b7:c9:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:11.387860 containerd[1609]: 2026-01-28 01:53:11.322 [INFO][5698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mbn64" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mbn64-eth0" Jan 28 01:53:11.448259 kubelet[2967]: E0128 01:53:11.430471 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:11.514000 audit: BPF prog-id=212 op=LOAD Jan 28 01:53:11.514000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff43dd77c0 a2=94 a3=1 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.518000 audit: BPF prog-id=212 op=UNLOAD Jan 28 01:53:11.518000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff43dd77c0 a2=94 a3=1 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.518000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.706000 audit: BPF prog-id=213 op=LOAD Jan 28 01:53:11.706000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff43dd77b0 a2=94 a3=4 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.706000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.706000 audit: BPF prog-id=213 op=UNLOAD Jan 28 01:53:11.706000 audit[5767]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff43dd77b0 a2=0 a3=4 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.706000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.707000 audit: BPF prog-id=214 op=LOAD Jan 28 01:53:11.707000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff43dd7610 a2=94 a3=5 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.707000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.707000 audit: BPF prog-id=214 op=UNLOAD Jan 28 01:53:11.707000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff43dd7610 a2=0 a3=5 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.707000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.707000 audit: BPF prog-id=215 op=LOAD Jan 28 01:53:11.707000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff43dd7830 a2=94 a3=6 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.707000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.707000 audit: BPF prog-id=215 op=UNLOAD Jan 28 01:53:11.707000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff43dd7830 a2=0 
a3=6 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.707000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.730746 containerd[1609]: time="2026-01-28T01:53:11.730132345Z" level=info msg="connecting to shim a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284" address="unix:///run/containerd/s/6bef76470f2a6e34b1d263ed923d2cc8e6cbb08501902c7f4fee7a43c944ae3a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:11.733000 audit: BPF prog-id=216 op=LOAD Jan 28 01:53:11.733000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff43dd6fe0 a2=94 a3=88 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.733000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.735000 audit: BPF prog-id=217 op=LOAD Jan 28 01:53:11.735000 audit[5767]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff43dd6e60 a2=94 a3=2 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.735000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.735000 audit: BPF prog-id=217 op=UNLOAD Jan 28 01:53:11.735000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff43dd6e90 a2=0 a3=7fff43dd6f90 items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:53:11.735000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.738000 audit: BPF prog-id=216 op=UNLOAD Jan 28 01:53:11.738000 audit[5767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2ddf5d10 a2=0 a3=667f4201da73528c items=0 ppid=5445 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:53:11.815000 audit: BPF prog-id=218 op=LOAD Jan 28 01:53:11.815000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffffa173610 a2=98 a3=1999999999999999 items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.817000 audit: BPF prog-id=218 op=UNLOAD Jan 28 01:53:11.817000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffffa1735e0 a3=0 items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.817000 
audit: BPF prog-id=219 op=LOAD Jan 28 01:53:11.817000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffffa1734f0 a2=94 a3=ffff items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.817000 audit: BPF prog-id=219 op=UNLOAD Jan 28 01:53:11.817000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffffa1734f0 a2=94 a3=ffff items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.818000 audit: BPF prog-id=220 op=LOAD Jan 28 01:53:11.818000 audit[5903]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffffa173530 a2=94 a3=7ffffa173710 items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.818000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.821000 audit: BPF prog-id=220 op=UNLOAD Jan 28 01:53:11.821000 audit[5903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffffa173530 a2=94 a3=7ffffa173710 items=0 ppid=5445 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:11.821000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:53:11.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:38452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:11.856428 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:38452.service - OpenSSH per-connection server daemon (10.0.0.1:38452). Jan 28 01:53:11.873535 systemd-networkd[1515]: cali85d0e4f0a50: Link UP Jan 28 01:53:11.874092 systemd-networkd[1515]: cali85d0e4f0a50: Gained carrier Jan 28 01:53:12.064080 systemd[1]: Started cri-containerd-a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284.scope - libcontainer container a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284. 
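The longer PROCTITLE value repeated in the bpftool audit records above decodes the same way (hex argv with NUL separators). Note that the kernel caps PROCTITLE records at 128 bytes, so the final map-name argument is cut off mid-word; the truncated tail is shown as-is rather than guessed:

```python
# Decode the bpftool map-create PROCTITLE from the audit records above.
raw = bytes.fromhex(
    "627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63"
    "616C69636F2F63616C69636F5F6661696C736166655F706F7274735F76310074"
    "7970650068617368006B657900340076616C7565003100656E74726965730036"
    "35353335006E616D650063616C69636F5F6661696C736166655F706F7274735F"
)
args = [p.decode("ascii") for p in raw.split(b"\x00") if p]
print(len(raw))        # 128 — exactly the kernel's PROCTITLE record cap
print(" ".join(args))
# → bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1
#   type hash key 4 value 1 entries 65535 name calico_failsafe_ports_
```

This shows Calico's Felix agent pinning its failsafe-ports BPF hash map under `/sys/fs/bpf/calico/`, which explains the surrounding pairs of BPF `LOAD`/`UNLOAD` audit events as bpftool probes and creates maps.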
Jan 28 01:53:12.211886 systemd-networkd[1515]: cali655754d6954: Gained IPv6LL Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.002 [INFO][5777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0 calico-apiserver-654b4ddbfd- calico-apiserver 3ef171ed-8146-4d6a-9063-eb31677aa1d4 1262 0 2026-01-28 01:49:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654b4ddbfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-654b4ddbfd-mgclm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85d0e4f0a50 [] [] }} ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.003 [INFO][5777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.693 [INFO][5806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" HandleID="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.698 [INFO][5806] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" HandleID="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003526b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-654b4ddbfd-mgclm", "timestamp":"2026-01-28 01:53:10.693939057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.702 [INFO][5806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.702 [INFO][5806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.702 [INFO][5806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:10.934 [INFO][5806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.125 [INFO][5806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.311 [INFO][5806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.403 [INFO][5806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.506 [INFO][5806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.506 [INFO][5806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.543 [INFO][5806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3 Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.689 [INFO][5806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.831 [INFO][5806] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.832 [INFO][5806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" host="localhost" Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.832 [INFO][5806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
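The IPAM walk above repeats the pattern from the first pod: the host holds an affinity for the block 192.168.88.128/26, and each new address (192.168.88.132 for the first apiserver pod, 192.168.88.133 for this one) must be claimed from inside that block. A quick sketch with Python's ipaddress module confirms the containment and the block's capacity:

```python
import ipaddress

# The host's affine block and the two addresses claimed in the log above.
block = ipaddress.ip_network("192.168.88.128/26")
claimed = [ipaddress.ip_address("192.168.88.132"),
           ipaddress.ip_address("192.168.88.133")]

# Every claimed IP must fall inside the host's affine block.
assert all(ip in block for ip in claimed)

# A /26 spans 64 addresses, so one block affinity covers up to 64 pod IPs
# before Calico must claim another block for this host.
print(block.num_addresses)  # → 64
```

This is why both assignments serialize on the "host-wide IPAM lock" seen in the log: concurrent CNI invocations on the same node contend for writes to the same block.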
Jan 28 01:53:12.233334 containerd[1609]: 2026-01-28 01:53:11.832 [INFO][5806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" HandleID="k8s-pod-network.497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Workload="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:11.856 [INFO][5777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0", GenerateName:"calico-apiserver-654b4ddbfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ef171ed-8146-4d6a-9063-eb31677aa1d4", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654b4ddbfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-654b4ddbfd-mgclm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85d0e4f0a50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:11.857 [INFO][5777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:11.857 [INFO][5777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85d0e4f0a50 ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:11.886 [INFO][5777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:11.996 [INFO][5777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0", 
GenerateName:"calico-apiserver-654b4ddbfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ef171ed-8146-4d6a-9063-eb31677aa1d4", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654b4ddbfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3", Pod:"calico-apiserver-654b4ddbfd-mgclm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85d0e4f0a50", MAC:"f6:cf:80:0d:61:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:12.235334 containerd[1609]: 2026-01-28 01:53:12.163 [INFO][5777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" Namespace="calico-apiserver" Pod="calico-apiserver-654b4ddbfd-mgclm" WorkloadEndpoint="localhost-k8s-calico--apiserver--654b4ddbfd--mgclm-eth0" Jan 28 01:53:12.514000 audit: BPF prog-id=221 op=LOAD Jan 28 01:53:12.523000 audit: BPF prog-id=222 op=LOAD Jan 28 01:53:12.523000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5874 pid=5887 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.524000 audit: BPF prog-id=222 op=UNLOAD Jan 28 01:53:12.524000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.524000 audit: BPF prog-id=223 op=LOAD Jan 28 01:53:12.524000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.528000 audit: BPF prog-id=224 op=LOAD Jan 28 01:53:12.528000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c000128218 a2=98 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.528000 audit: BPF prog-id=224 op=UNLOAD Jan 28 01:53:12.528000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.528000 audit: BPF prog-id=223 op=UNLOAD Jan 28 01:53:12.528000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.528000 audit: BPF prog-id=225 op=LOAD Jan 28 01:53:12.528000 audit[5887]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5874 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663434633735373563646633313437303336396166313638323338 Jan 28 01:53:12.569889 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:12.628916 containerd[1609]: time="2026-01-28T01:53:12.628804561Z" level=info msg="connecting to shim 497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" address="unix:///run/containerd/s/44a9f6f3464ad7b8042cabd9cf75b1a5bd7253f0f5af1c54ea11418c141dfeb5" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:12.898000 audit[5909]: USER_ACCT pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:12.928164 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 38452 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:53:12.932000 audit[5909]: CRED_ACQ pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:12.932000 audit[5909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45bf0100 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:12.932000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:13.001250 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:13.164420 systemd[1]: Started cri-containerd-497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3.scope - libcontainer container 497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3. Jan 28 01:53:13.222520 systemd-logind[1586]: New session 14 of user core. Jan 28 01:53:13.282356 kubelet[2967]: E0128 01:53:13.280917 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:13.291862 containerd[1609]: time="2026-01-28T01:53:13.288179818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,}" Jan 28 01:53:13.307459 systemd-networkd[1515]: cali85d0e4f0a50: Gained IPv6LL Jan 28 01:53:13.454004 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 28 01:53:13.536000 audit[5909]: USER_START pid=5909 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:13.608200 containerd[1609]: time="2026-01-28T01:53:13.592508446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mbn64,Uid:ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284\"" Jan 28 01:53:13.614000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:13.686026 containerd[1609]: time="2026-01-28T01:53:13.682208434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:53:13.713000 audit: BPF prog-id=226 op=LOAD Jan 28 01:53:13.724000 audit: BPF prog-id=227 op=LOAD Jan 28 01:53:13.724000 audit[5967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.724000 audit: BPF prog-id=227 op=UNLOAD Jan 28 01:53:13.724000 audit[5967]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.745000 audit: BPF prog-id=228 op=LOAD Jan 28 01:53:13.745000 audit[5967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.756000 audit: BPF prog-id=229 op=LOAD Jan 28 01:53:13.756000 audit[5967]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.784000 audit: BPF prog-id=229 op=UNLOAD Jan 28 01:53:13.784000 
audit[5967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.784000 audit: BPF prog-id=228 op=UNLOAD Jan 28 01:53:13.784000 audit[5967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.784000 audit: BPF prog-id=230 op=LOAD Jan 28 01:53:13.784000 audit[5967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5955 pid=5967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:13.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373138373634386531613436323361383364653866386362386461 Jan 28 01:53:13.817877 systemd-networkd[1515]: 
vxlan.calico: Link UP Jan 28 01:53:13.818132 systemd-networkd[1515]: vxlan.calico: Gained carrier Jan 28 01:53:13.830570 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:13.931909 systemd-networkd[1515]: calied948a2123c: Link UP Jan 28 01:53:13.969803 systemd-networkd[1515]: calied948a2123c: Gained carrier Jan 28 01:53:14.157173 containerd[1609]: time="2026-01-28T01:53:14.157120094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:14.203000 audit: BPF prog-id=231 op=LOAD Jan 28 01:53:14.203000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeb7c2f60 a2=98 a3=0 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.203000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.203000 audit: BPF prog-id=231 op=UNLOAD Jan 28 01:53:14.203000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeeb7c2f30 a3=0 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.203000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.227000 audit: BPF prog-id=232 op=LOAD Jan 28 01:53:14.227000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeb7c2d70 a2=94 
a3=54428f items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.227000 audit: BPF prog-id=232 op=UNLOAD Jan 28 01:53:14.227000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeeb7c2d70 a2=94 a3=54428f items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.227000 audit: BPF prog-id=233 op=LOAD Jan 28 01:53:14.227000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeb7c2da0 a2=94 a3=2 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.239041 containerd[1609]: time="2026-01-28T01:53:14.234059178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:53:14.239041 containerd[1609]: time="2026-01-28T01:53:14.234526876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:14.234000 audit: BPF prog-id=233 op=UNLOAD Jan 28 01:53:14.234000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeeb7c2da0 a2=0 a3=2 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.234000 audit: BPF prog-id=234 op=LOAD Jan 28 01:53:14.234000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeeb7c2b50 a2=94 a3=4 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.234000 audit: BPF prog-id=234 op=UNLOAD Jan 28 01:53:14.234000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeeb7c2b50 a2=94 a3=4 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:53:14.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.234000 audit: BPF prog-id=235 op=LOAD Jan 28 01:53:14.234000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeeb7c2c50 a2=94 a3=7ffeeb7c2dd0 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.234000 audit: BPF prog-id=235 op=UNLOAD Jan 28 01:53:14.234000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeeb7c2c50 a2=0 a3=7ffeeb7c2dd0 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.234000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.422 [INFO][5814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0 calico-kube-controllers-849fc56f8- calico-system 67371941-5272-4e0e-84ef-cf7de9065a57 1259 0 2026-01-28 01:51:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:849fc56f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-849fc56f8-v9sqx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied948a2123c [] [] }} ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.425 [INFO][5814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.886 [INFO][5861] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" HandleID="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Workload="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.887 [INFO][5861] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" HandleID="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Workload="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000524460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-849fc56f8-v9sqx", "timestamp":"2026-01-28 01:53:11.886813838 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.887 [INFO][5861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.887 [INFO][5861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:11.887 [INFO][5861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:12.237 [INFO][5861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:12.540 [INFO][5861] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:12.617 [INFO][5861] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:12.656 [INFO][5861] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.002 [INFO][5861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.002 [INFO][5861] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.157 [INFO][5861] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9 Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.451 [INFO][5861] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.705 [INFO][5861] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.707 [INFO][5861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" host="localhost" Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.711 [INFO][5861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:53:14.250379 containerd[1609]: 2026-01-28 01:53:13.712 [INFO][5861] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" HandleID="k8s-pod-network.781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Workload="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:13.727 [INFO][5814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0", GenerateName:"calico-kube-controllers-849fc56f8-", Namespace:"calico-system", SelfLink:"", UID:"67371941-5272-4e0e-84ef-cf7de9065a57", ResourceVersion:"1259", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849fc56f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-849fc56f8-v9sqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied948a2123c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:13.728 [INFO][5814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:13.728 [INFO][5814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied948a2123c ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:13.926 [INFO][5814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:13.927 [INFO][5814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0", GenerateName:"calico-kube-controllers-849fc56f8-", Namespace:"calico-system", SelfLink:"", UID:"67371941-5272-4e0e-84ef-cf7de9065a57", ResourceVersion:"1259", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849fc56f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9", Pod:"calico-kube-controllers-849fc56f8-v9sqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied948a2123c", MAC:"c6:95:ac:43:a3:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:14.256653 containerd[1609]: 2026-01-28 01:53:14.111 [INFO][5814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" Namespace="calico-system" Pod="calico-kube-controllers-849fc56f8-v9sqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--849fc56f8--v9sqx-eth0" Jan 28 01:53:14.264159 kubelet[2967]: E0128 01:53:14.258886 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:14.264159 kubelet[2967]: E0128 01:53:14.258944 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:14.264159 kubelet[2967]: E0128 01:53:14.259361 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjkwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:14.265288 kubelet[2967]: E0128 01:53:14.265247 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:53:14.322000 audit: BPF prog-id=236 op=LOAD Jan 28 01:53:14.322000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeeb7c2380 a2=94 a3=2 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.322000 audit: BPF prog-id=236 op=UNLOAD Jan 28 01:53:14.322000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeeb7c2380 a2=0 a3=2 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.322000 audit: BPF prog-id=237 op=LOAD Jan 28 01:53:14.322000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeeb7c2480 a2=94 a3=30 items=0 ppid=5445 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:14.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:53:14.664312 kubelet[2967]: E0128 01:53:14.656252 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:53:15.186636 containerd[1609]: time="2026-01-28T01:53:15.165522046Z" level=error msg="get state for 497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3" error="context deadline exceeded" Jan 28 01:53:15.186636 containerd[1609]: time="2026-01-28T01:53:15.165569163Z" level=warning msg="unknown status" status=0 Jan 28 01:53:15.201000 audit: BPF prog-id=238 op=LOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc04785a0 a2=98 a3=0 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.201000 audit: BPF prog-id=238 op=UNLOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffc0478570 a3=0 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.201000 audit: BPF prog-id=239 op=LOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc0478390 a2=94 a3=54428f items=0 ppid=5445 pid=6054 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.201000 audit: BPF prog-id=239 op=UNLOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc0478390 a2=94 a3=54428f items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.201000 audit: BPF prog-id=240 op=LOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc04783c0 a2=94 a3=2 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.201000 audit: BPF prog-id=240 op=UNLOAD Jan 28 01:53:15.201000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc04783c0 a2=0 a3=2 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:15.290034 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Jan 28 01:53:15.357958 systemd-networkd[1515]: calied948a2123c: Gained IPv6LL Jan 28 01:53:15.396095 containerd[1609]: time="2026-01-28T01:53:15.395817405Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 28 01:53:15.751844 kernel: kauditd_printk_skb: 165 callbacks suppressed Jan 28 01:53:15.752086 kernel: audit: type=1325 audit(1769565195.722:772): table=filter:127 family=2 entries=20 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:15.722000 audit[6074]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:15.818287 kernel: audit: type=1300 audit(1769565195.722:772): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce79dc790 a2=0 a3=7ffce79dc77c items=0 ppid=3078 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.722000 audit[6074]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce79dc790 a2=0 a3=7ffce79dc77c items=0 ppid=3078 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.784328 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:15.819439 sshd[5997]: Connection closed by 10.0.0.1 port 38452 Jan 28 01:53:15.865289 kernel: audit: 
type=1327 audit(1769565195.722:772): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:15.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:15.882053 kubelet[2967]: E0128 01:53:15.877224 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:53:15.980109 kernel: audit: type=1106 audit(1769565195.813:773): pid=5909 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:15.813000 audit[5909]: USER_END pid=5909 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:15.980417 containerd[1609]: time="2026-01-28T01:53:15.965112876Z" level=info msg="connecting to shim 781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9" address="unix:///run/containerd/s/bfc1555971703610e6772bb8849f46cdaab0feb4ff9a8eb84a04e352ff778ba9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 
01:53:15.813000 audit[5909]: CRED_DISP pid=5909 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:16.000321 systemd-networkd[1515]: calibf9968ce59c: Link UP Jan 28 01:53:16.009499 systemd-networkd[1515]: calibf9968ce59c: Gained carrier Jan 28 01:53:16.040811 kernel: audit: type=1104 audit(1769565195.813:774): pid=5909 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:15.831000 audit[6074]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:16.106312 kernel: audit: type=1325 audit(1769565195.831:775): table=nat:128 family=2 entries=14 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:16.106521 kernel: audit: type=1300 audit(1769565195.831:775): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce79dc790 a2=0 a3=0 items=0 ppid=3078 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.831000 audit[6074]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce79dc790 a2=0 a3=0 items=0 ppid=3078 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:15.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:16.174199 kernel: audit: 
type=1327 audit(1769565195.831:775): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:16.181896 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:38452.service: Deactivated successfully. Jan 28 01:53:16.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:38452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:16.202034 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:53:16.238942 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:53:16.241026 kernel: audit: type=1131 audit(1769565196.178:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:38452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:16.274971 systemd-logind[1586]: Removed session 14. 
Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:11.658 [INFO][5825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ms9md-eth0 csi-node-driver- calico-system d33e070d-1851-4242-98ee-97e68b203245 1113 0 2026-01-28 01:51:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ms9md eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf9968ce59c [] [] }} ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:11.663 [INFO][5825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:12.159 [INFO][5890] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" HandleID="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Workload="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:12.159 [INFO][5890] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" HandleID="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" 
Workload="localhost-k8s-csi--node--driver--ms9md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b0f10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ms9md", "timestamp":"2026-01-28 01:53:12.159267997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:12.159 [INFO][5890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:13.714 [INFO][5890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:13.721 [INFO][5890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:13.835 [INFO][5890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.004 [INFO][5890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.327 [INFO][5890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.397 [INFO][5890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.555 [INFO][5890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.591 [INFO][5890] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:14.732 [INFO][5890] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:15.369 [INFO][5890] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:15.572 [INFO][5890] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:15.572 [INFO][5890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" host="localhost" Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:15.572 [INFO][5890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:53:16.493878 containerd[1609]: 2026-01-28 01:53:15.578 [INFO][5890] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" HandleID="k8s-pod-network.e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Workload="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:15.900 [INFO][5825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9md-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d33e070d-1851-4242-98ee-97e68b203245", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ms9md", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf9968ce59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:15.903 [INFO][5825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:15.905 [INFO][5825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf9968ce59c ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:16.028 [INFO][5825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:16.136 [INFO][5825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9md-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d33e070d-1851-4242-98ee-97e68b203245", ResourceVersion:"1113", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 28, 1, 51, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b", Pod:"csi-node-driver-ms9md", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf9968ce59c", MAC:"ea:7b:f7:e7:52:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:16.501903 containerd[1609]: 2026-01-28 01:53:16.444 [INFO][5825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" Namespace="calico-system" Pod="csi-node-driver-ms9md" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9md-eth0" Jan 28 01:53:16.543064 containerd[1609]: time="2026-01-28T01:53:16.532362744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654b4ddbfd-mgclm,Uid:3ef171ed-8146-4d6a-9063-eb31677aa1d4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3\"" Jan 28 01:53:16.625052 containerd[1609]: time="2026-01-28T01:53:16.615069810Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:53:17.016572 containerd[1609]: time="2026-01-28T01:53:17.004992601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:17.040106 systemd[1]: Started cri-containerd-781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9.scope - libcontainer container 781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9. Jan 28 01:53:17.079456 containerd[1609]: time="2026-01-28T01:53:17.063960950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:53:17.079456 containerd[1609]: time="2026-01-28T01:53:17.064092323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:17.079456 containerd[1609]: time="2026-01-28T01:53:17.076131479Z" level=info msg="connecting to shim e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b" address="unix:///run/containerd/s/bd15633e9fafaf6aef8432f105745ace2183b48c19acb8f535eba00bbf21b2ee" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:17.079758 kubelet[2967]: E0128 01:53:17.078825 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:17.079758 kubelet[2967]: E0128 01:53:17.078873 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:17.079758 kubelet[2967]: E0128 01:53:17.079026 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rq4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:17.081226 kubelet[2967]: E0128 01:53:17.081003 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:53:17.158297 systemd-networkd[1515]: calibf9968ce59c: Gained IPv6LL Jan 28 01:53:17.580119 kernel: audit: type=1334 audit(1769565197.556:777): prog-id=241 op=LOAD Jan 28 01:53:17.556000 audit: BPF prog-id=241 op=LOAD Jan 28 01:53:17.556000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=4 a0=5 a1=7fffc0478280 a2=94 a3=1 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.556000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.557000 audit: BPF prog-id=241 op=UNLOAD Jan 28 01:53:17.557000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc0478280 a2=94 a3=1 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.557000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.630000 audit: BPF prog-id=242 op=LOAD Jan 28 01:53:17.635000 audit: BPF prog-id=243 op=LOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=243 op=UNLOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=244 op=LOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=245 op=LOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=245 op=UNLOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=244 op=UNLOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.635000 audit: BPF prog-id=246 op=LOAD Jan 28 01:53:17.635000 audit[6098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6079 pid=6098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738316433353166336235336231326335366532633934316665333862 Jan 28 01:53:17.654755 systemd-resolved[1290]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:17.684000 audit: BPF prog-id=247 op=LOAD Jan 28 01:53:17.684000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc0478270 a2=94 a3=4 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.684000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.686000 audit: BPF prog-id=247 op=UNLOAD Jan 28 01:53:17.686000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffc0478270 a2=0 a3=4 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.686000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.687000 audit: BPF prog-id=248 op=LOAD Jan 28 01:53:17.687000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc04780d0 a2=94 a3=5 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.688000 audit: BPF prog-id=248 op=UNLOAD Jan 28 01:53:17.688000 audit[6054]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc04780d0 a2=0 a3=5 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.688000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.699067 systemd[1]: Started cri-containerd-e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b.scope - libcontainer container e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b. Jan 28 01:53:17.700000 audit: BPF prog-id=249 op=LOAD Jan 28 01:53:17.700000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc04782f0 a2=94 a3=6 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.700000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.700000 audit: BPF prog-id=249 op=UNLOAD Jan 28 01:53:17.700000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffc04782f0 a2=0 a3=6 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.700000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.704000 audit: BPF 
prog-id=250 op=LOAD Jan 28 01:53:17.704000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc0477aa0 a2=94 a3=88 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.707000 audit: BPF prog-id=251 op=LOAD Jan 28 01:53:17.707000 audit[6054]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffc0477920 a2=94 a3=2 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.707000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.707000 audit: BPF prog-id=251 op=UNLOAD Jan 28 01:53:17.707000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffc0477950 a2=0 a3=7fffc0477a50 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.707000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.710000 audit: BPF prog-id=250 op=UNLOAD Jan 28 01:53:17.710000 audit[6054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1539dd10 a2=0 
a3=e1ffac42560167c0 items=0 ppid=5445 pid=6054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.710000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:53:17.798000 audit: BPF prog-id=237 op=UNLOAD Jan 28 01:53:17.798000 audit[5445]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001068440 a2=0 a3=0 items=0 ppid=5425 pid=5445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:17.798000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 01:53:17.954543 kubelet[2967]: E0128 01:53:17.933872 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:53:18.027000 audit: BPF prog-id=252 op=LOAD Jan 28 01:53:18.027000 audit: BPF prog-id=253 op=LOAD Jan 28 01:53:18.027000 audit[6143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.027000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.027000 audit: BPF prog-id=253 op=UNLOAD Jan 28 01:53:18.027000 audit[6143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.054000 audit: BPF prog-id=254 op=LOAD Jan 28 01:53:18.054000 audit[6143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.054000 audit: BPF prog-id=255 op=LOAD Jan 28 01:53:18.054000 audit[6143]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.054000 audit: BPF prog-id=255 op=UNLOAD Jan 28 01:53:18.054000 audit[6143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.067000 audit: BPF prog-id=254 op=UNLOAD Jan 28 01:53:18.067000 audit[6143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.067000 audit: BPF prog-id=256 op=LOAD Jan 28 01:53:18.067000 audit[6143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6124 pid=6143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373865643034393031626231653037383263313538626638326563 Jan 28 01:53:18.221428 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:18.245559 systemd-networkd[1515]: cali70934715ed6: Link UP Jan 28 01:53:18.246066 systemd-networkd[1515]: cali70934715ed6: Gained carrier Jan 28 01:53:18.308227 containerd[1609]: time="2026-01-28T01:53:18.308173074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849fc56f8-v9sqx,Uid:67371941-5272-4e0e-84ef-cf7de9065a57,Namespace:calico-system,Attempt:0,} returns sandbox id \"781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9\"" Jan 28 01:53:18.382948 containerd[1609]: time="2026-01-28T01:53:18.382897292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:14.415 [INFO][5996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h25bw-eth0 coredns-674b8bbfcf- kube-system 0da3871e-a4b1-42ab-9e6b-d2183806355d 1260 0 2026-01-28 01:48:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h25bw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70934715ed6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:14.417 [INFO][5996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:16.829 [INFO][6048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" HandleID="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Workload="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:16.837 [INFO][6048] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" HandleID="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Workload="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003963e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h25bw", "timestamp":"2026-01-28 01:53:16.829272584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:16.837 [INFO][6048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:16.837 [INFO][6048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:16.837 [INFO][6048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.115 [INFO][6048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.541 [INFO][6048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.658 [INFO][6048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.708 [INFO][6048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.766 [INFO][6048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.766 [INFO][6048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.784 [INFO][6048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9 Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:17.874 [INFO][6048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:18.071 [INFO][6048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:18.071 [INFO][6048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" host="localhost" Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:18.071 [INFO][6048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:53:18.542883 containerd[1609]: 2026-01-28 01:53:18.071 [INFO][6048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" HandleID="k8s-pod-network.e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Workload="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.229 [INFO][5996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h25bw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da3871e-a4b1-42ab-9e6b-d2183806355d", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 48, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h25bw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70934715ed6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.230 [INFO][5996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.231 [INFO][5996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70934715ed6 ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.267 [INFO][5996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.274 [INFO][5996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h25bw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da3871e-a4b1-42ab-9e6b-d2183806355d", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 48, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9", Pod:"coredns-674b8bbfcf-h25bw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70934715ed6", MAC:"5a:6b:65:58:49:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:53:18.548260 containerd[1609]: 2026-01-28 01:53:18.463 [INFO][5996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-h25bw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h25bw-eth0" Jan 28 01:53:18.598000 audit[6175]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:18.598000 audit[6175]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3ff4ea80 a2=0 a3=7ffc3ff4ea6c items=0 ppid=3078 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:18.624547 containerd[1609]: time="2026-01-28T01:53:18.624488325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:18.633000 audit[6175]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=6175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:53:18.633000 audit[6175]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc3ff4ea80 a2=0 a3=0 items=0 ppid=3078 pid=6175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:18.633000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:53:18.684130 containerd[1609]: time="2026-01-28T01:53:18.683829089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:53:18.684404 containerd[1609]: time="2026-01-28T01:53:18.684182695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:18.686191 kubelet[2967]: E0128 01:53:18.685999 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:53:18.690198 kubelet[2967]: E0128 01:53:18.689146 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:53:18.714326 kubelet[2967]: E0128 01:53:18.708351 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:18.718110 kubelet[2967]: E0128 01:53:18.718058 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:53:18.977824 containerd[1609]: time="2026-01-28T01:53:18.977777468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9md,Uid:d33e070d-1851-4242-98ee-97e68b203245,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b\"" Jan 28 01:53:19.027851 kubelet[2967]: E0128 01:53:19.027793 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:53:19.097336 containerd[1609]: time="2026-01-28T01:53:19.097004669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:53:19.110760 containerd[1609]: time="2026-01-28T01:53:19.110566872Z" level=info msg="connecting to shim e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9" address="unix:///run/containerd/s/7e7ebf47fdc5a12d5efe4cced35787ec06f78e0a628fbc24b710b073b3af5054" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:53:19.362289 containerd[1609]: time="2026-01-28T01:53:19.359819773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:19.369731 containerd[1609]: time="2026-01-28T01:53:19.369173154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:53:19.369731 containerd[1609]: time="2026-01-28T01:53:19.369295792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:19.382034 kubelet[2967]: E0128 01:53:19.380427 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:53:19.382034 kubelet[2967]: E0128 01:53:19.380501 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:53:19.382034 kubelet[2967]: E0128 01:53:19.380881 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOpt
ions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:19.391850 containerd[1609]: time="2026-01-28T01:53:19.389808057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:53:19.424784 systemd[1]: Started cri-containerd-e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9.scope - libcontainer container e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9. 
Jan 28 01:53:19.494003 containerd[1609]: time="2026-01-28T01:53:19.493946436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:19.509999 systemd-networkd[1515]: cali70934715ed6: Gained IPv6LL Jan 28 01:53:19.529901 containerd[1609]: time="2026-01-28T01:53:19.529831996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:53:19.530536 containerd[1609]: time="2026-01-28T01:53:19.530289265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:19.534849 kubelet[2967]: E0128 01:53:19.532427 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:53:19.535195 kubelet[2967]: E0128 01:53:19.535162 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:53:19.554480 kubelet[2967]: E0128 01:53:19.536625 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:19.558249 kubelet[2967]: E0128 01:53:19.558015 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:53:19.596000 audit: BPF prog-id=257 op=LOAD Jan 28 01:53:19.607000 audit: BPF prog-id=258 op=LOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=258 op=UNLOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=259 op=LOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168488 a2=98 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=260 op=LOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000168218 a2=98 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=260 op=UNLOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=259 op=UNLOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.607000 audit: BPF prog-id=261 op=LOAD Jan 28 01:53:19.607000 audit[6214]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001686e8 a2=98 a3=0 items=0 ppid=6202 pid=6214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530376134366436613439633433633239323765643935346435646437 Jan 28 01:53:19.627214 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:53:19.870000 audit[6255]: NETFILTER_CFG table=mangle:131 family=2 entries=16 op=nft_register_chain pid=6255 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:53:19.870000 audit[6255]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff833a6f30 a2=0 a3=7fff833a6f1c items=0 ppid=5445 pid=6255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.870000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:53:19.872000 audit[6253]: NETFILTER_CFG table=nat:132 family=2 entries=15 op=nft_register_chain pid=6253 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:53:19.872000 audit[6253]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd2b50ac50 a2=0 a3=7ffd2b50ac3c items=0 ppid=5445 pid=6253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.872000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:53:19.923097 containerd[1609]: time="2026-01-28T01:53:19.922523743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h25bw,Uid:0da3871e-a4b1-42ab-9e6b-d2183806355d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9\"" Jan 28 01:53:19.924781 kubelet[2967]: E0128 01:53:19.924546 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:19.952000 audit[6256]: NETFILTER_CFG table=raw:133 family=2 entries=21 
op=nft_register_chain pid=6256 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:53:19.952000 audit[6256]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe41a8c8f0 a2=0 a3=7ffe41a8c8dc items=0 ppid=5445 pid=6256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:19.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:53:19.981955 containerd[1609]: time="2026-01-28T01:53:19.978028901Z" level=info msg="CreateContainer within sandbox \"e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:53:20.052433 kubelet[2967]: E0128 01:53:20.049975 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:53:20.055491 kubelet[2967]: E0128 01:53:20.055441 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:53:20.093012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430988717.mount: Deactivated successfully. Jan 28 01:53:20.114083 containerd[1609]: time="2026-01-28T01:53:20.114021042Z" level=info msg="Container c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:53:20.191480 containerd[1609]: time="2026-01-28T01:53:20.191252180Z" level=info msg="CreateContainer within sandbox \"e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2\"" Jan 28 01:53:20.046000 audit[6264]: NETFILTER_CFG table=filter:134 family=2 entries=235 op=nft_register_chain pid=6264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:53:20.046000 audit[6264]: SYSCALL arch=c000003e syscall=46 success=yes exit=139344 a0=3 a1=7ffce55a2740 a2=0 a3=7ffce55a272c items=0 ppid=5445 pid=6264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.046000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:53:20.207217 containerd[1609]: time="2026-01-28T01:53:20.194505140Z" level=info 
msg="StartContainer for \"c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2\"" Jan 28 01:53:20.207217 containerd[1609]: time="2026-01-28T01:53:20.196094550Z" level=info msg="connecting to shim c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2" address="unix:///run/containerd/s/7e7ebf47fdc5a12d5efe4cced35787ec06f78e0a628fbc24b710b073b3af5054" protocol=ttrpc version=3 Jan 28 01:53:20.381951 systemd[1]: Started cri-containerd-c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2.scope - libcontainer container c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2. Jan 28 01:53:20.571000 audit: BPF prog-id=262 op=LOAD Jan 28 01:53:20.572000 audit: BPF prog-id=263 op=LOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=263 op=UNLOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=264 op=LOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=265 op=LOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=265 op=UNLOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=264 op=UNLOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.572000 audit: BPF prog-id=266 op=LOAD Jan 28 01:53:20.572000 audit[6271]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6202 pid=6271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332666137376139376237646462643635333563316537616237613235 Jan 28 01:53:20.602000 audit[6294]: NETFILTER_CFG table=filter:135 family=2 entries=110 op=nft_register_chain pid=6294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:53:20.602000 audit[6294]: SYSCALL arch=c000003e syscall=46 success=yes exit=58924 a0=3 a1=7ffc022e9710 a2=0 a3=7ffc022e96fc items=0 
ppid=5445 pid=6294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:20.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:53:20.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:20.850123 kernel: kauditd_printk_skb: 147 callbacks suppressed Jan 28 01:53:20.850195 kernel: audit: type=1130 audit(1769565200.829:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:20.830071 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:39438.service - OpenSSH per-connection server daemon (10.0.0.1:39438). 
Jan 28 01:53:20.925478 containerd[1609]: time="2026-01-28T01:53:20.925101152Z" level=info msg="StartContainer for \"c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2\" returns successfully"
Jan 28 01:53:21.268130 containerd[1609]: time="2026-01-28T01:53:21.268006859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 01:53:21.273201 kubelet[2967]: E0128 01:53:21.271211 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:21.287648 kubelet[2967]: E0128 01:53:21.280954 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:53:21.492815 kubelet[2967]: E0128 01:53:21.489136 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:21.527254 containerd[1609]: time="2026-01-28T01:53:21.525114206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:21.538876 containerd[1609]: time="2026-01-28T01:53:21.538802596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 01:53:21.555062 containerd[1609]: time="2026-01-28T01:53:21.554999872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:21.582941 kubelet[2967]: E0128 01:53:21.568542 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:53:21.582941 kubelet[2967]: E0128 01:53:21.568740 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:53:21.582941 kubelet[2967]: E0128 01:53:21.568881 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:11f2d6a54a3d467fbd60c4526f82d473,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:21.697366 containerd[1609]: time="2026-01-28T01:53:21.693202457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 01:53:21.968056 containerd[1609]: time="2026-01-28T01:53:21.955262787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:21.978777 containerd[1609]: time="2026-01-28T01:53:21.976928635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:21.978777 containerd[1609]: time="2026-01-28T01:53:21.977051402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 01:53:22.007808 kubelet[2967]: E0128 01:53:21.999359 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:53:22.007808 kubelet[2967]: E0128 01:53:22.003877 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:53:22.007808 kubelet[2967]: E0128 01:53:22.004051 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:22.007808 kubelet[2967]: E0128 01:53:22.005382 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:53:22.196000 audit[6301]: USER_ACCT pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.202976 sshd[6301]: Accepted publickey for core from 10.0.0.1 port 39438 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:53:22.240938 sshd-session[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:53:22.284773 kernel: audit: type=1101 audit(1769565202.196:830): pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.221000 audit[6301]: CRED_ACQ pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.378267 kernel: audit: type=1103 audit(1769565202.221:831): pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.378395 kernel: audit: type=1006 audit(1769565202.229:832): pid=6301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Jan 28 01:53:22.229000 audit[6301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd6a6e660 a2=3 a3=0 items=0 ppid=1 pid=6301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:22.442907 systemd-logind[1586]: New session 15 of user core.
Jan 28 01:53:22.527873 kernel: audit: type=1300 audit(1769565202.229:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd6a6e660 a2=3 a3=0 items=0 ppid=1 pid=6301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:22.528398 kernel: audit: type=1327 audit(1769565202.229:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:22.229000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:22.575163 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 28 01:53:22.637229 kubelet[2967]: E0128 01:53:22.636771 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:22.667000 audit[6301]: USER_START pid=6301 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.741810 kernel: audit: type=1105 audit(1769565202.667:833): pid=6301 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.741918 kernel: audit: type=1103 audit(1769565202.671:834): pid=6315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.671000 audit[6315]: CRED_ACQ pid=6315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:22.919266 kubelet[2967]: I0128 01:53:22.917425 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h25bw" podStartSLOduration=286.917402609 podStartE2EDuration="4m46.917402609s" podCreationTimestamp="2026-01-28 01:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:53:22.907785123 +0000 UTC m=+289.512697022" watchObservedRunningTime="2026-01-28 01:53:22.917402609 +0000 UTC m=+289.522314288"
Jan 28 01:53:23.026378 kernel: audit: type=1325 audit(1769565202.958:835): table=filter:136 family=2 entries=17 op=nft_register_rule pid=6325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:53:23.026633 kernel: audit: type=1300 audit(1769565202.958:835): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3dc46140 a2=0 a3=7ffd3dc4612c items=0 ppid=3078 pid=6325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:22.958000 audit[6325]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=6325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:53:22.958000 audit[6325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3dc46140 a2=0 a3=7ffd3dc4612c items=0 ppid=3078 pid=6325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:22.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:53:23.143000 audit[6325]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=6325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:53:23.143000 audit[6325]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd3dc46140 a2=0 a3=7ffd3dc4612c items=0 ppid=3078 pid=6325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:23.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:53:23.239847 containerd[1609]: time="2026-01-28T01:53:23.238535757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 28 01:53:23.388318 containerd[1609]: time="2026-01-28T01:53:23.388252543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:23.392960 containerd[1609]: time="2026-01-28T01:53:23.392908607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 28 01:53:23.393382 containerd[1609]: time="2026-01-28T01:53:23.393146919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:23.407260 kubelet[2967]: E0128 01:53:23.406309 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:53:23.407260 kubelet[2967]: E0128 01:53:23.406815 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:53:23.420780 kubelet[2967]: E0128 01:53:23.419948 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:23.451894 kubelet[2967]: E0128 01:53:23.433356 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:53:23.577000 audit[6328]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=6328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:53:23.577000 audit[6328]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc15176070 a2=0 a3=7ffc1517605c items=0 ppid=3078 pid=6328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:23.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:53:23.585029 kubelet[2967]: E0128 01:53:23.574128 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:23.705773 sshd[6315]: Connection closed by 10.0.0.1 port 39438
Jan 28 01:53:23.706456 sshd-session[6301]: pam_unix(sshd:session): session closed for user core
Jan 28 01:53:23.710000 audit[6301]: USER_END pid=6301 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:23.711000 audit[6301]: CRED_DISP pid=6301 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:23.715000 audit[6328]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=6328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:53:23.715000 audit[6328]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc15176070 a2=0 a3=7ffc1517605c items=0 ppid=3078 pid=6328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:23.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:53:23.727316 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:39438.service: Deactivated successfully.
Jan 28 01:53:23.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:23.741137 systemd[1]: session-15.scope: Deactivated successfully.
Jan 28 01:53:23.755852 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit.
Jan 28 01:53:23.759660 systemd-logind[1586]: Removed session 15.
Jan 28 01:53:25.190868 kubelet[2967]: E0128 01:53:25.186390 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:42450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:28.831973 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 28 01:53:28.832108 kernel: audit: type=1130 audit(1769565208.804:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:42450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:28.805797 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450).
Jan 28 01:53:29.215358 containerd[1609]: time="2026-01-28T01:53:29.214350883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:53:29.288831 sshd[6343]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:53:29.287000 audit[6343]: USER_ACCT pid=6343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.292618 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:53:29.313770 systemd-logind[1586]: New session 16 of user core.
Jan 28 01:53:29.353227 kernel: audit: type=1101 audit(1769565209.287:843): pid=6343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.290000 audit[6343]: CRED_ACQ pid=6343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.393008 kernel: audit: type=1103 audit(1769565209.290:844): pid=6343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.393926 kernel: audit: type=1006 audit(1769565209.290:845): pid=6343 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Jan 28 01:53:29.411422 kernel: audit: type=1300 audit(1769565209.290:845): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0e018630 a2=3 a3=0 items=0 ppid=1 pid=6343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:29.290000 audit[6343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0e018630 a2=3 a3=0 items=0 ppid=1 pid=6343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:29.415162 containerd[1609]: time="2026-01-28T01:53:29.414867715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:29.425003 containerd[1609]: time="2026-01-28T01:53:29.424885765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:29.426617 containerd[1609]: time="2026-01-28T01:53:29.425393448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:53:29.429339 kubelet[2967]: E0128 01:53:29.428878 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:53:29.429339 kubelet[2967]: E0128 01:53:29.429013 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:53:29.430128 kubelet[2967]: E0128 01:53:29.429804 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rq4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:29.435021 kubelet[2967]: E0128 01:53:29.433773 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:53:29.462941 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 28 01:53:29.290000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:29.484178 kernel: audit: type=1327 audit(1769565209.290:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:29.493000 audit[6343]: USER_START pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.536915 kernel: audit: type=1105 audit(1769565209.493:846): pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.529000 audit[6347]: CRED_ACQ pid=6347 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:29.566786 kernel: audit: type=1103 audit(1769565209.529:847): pid=6347 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:30.635867 sshd[6347]: Connection closed by 10.0.0.1 port 42450
Jan 28 01:53:30.639770 sshd-session[6343]: pam_unix(sshd:session): session closed for user core
Jan 28 01:53:30.664000 audit[6343]: USER_END pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:30.678400 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:42450.service: Deactivated successfully.
Jan 28 01:53:30.701907 systemd[1]: session-16.scope: Deactivated successfully.
Jan 28 01:53:30.664000 audit[6343]: CRED_DISP pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:30.704464 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit.
Jan 28 01:53:30.730875 kernel: audit: type=1106 audit(1769565210.664:848): pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:30.731031 kernel: audit: type=1104 audit(1769565210.664:849): pid=6343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:30.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:42450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:30.749327 systemd-logind[1586]: Removed session 16.
Jan 28 01:53:31.085315 kubelet[2967]: E0128 01:53:31.083800 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:31.234750 containerd[1609]: time="2026-01-28T01:53:31.231064479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:53:31.455360 containerd[1609]: time="2026-01-28T01:53:31.454997935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:31.459786 containerd[1609]: time="2026-01-28T01:53:31.459750690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:31.460104 containerd[1609]: time="2026-01-28T01:53:31.459912671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:53:31.463425 kubelet[2967]: E0128 01:53:31.463215 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:31.464105 kubelet[2967]: E0128 01:53:31.463760 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:31.470328 kubelet[2967]: E0128 01:53:31.470144 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjkwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:31.473038 kubelet[2967]: E0128 01:53:31.472317 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:53:34.199222 containerd[1609]: time="2026-01-28T01:53:34.197081273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:53:34.306151 containerd[1609]: time="2026-01-28T01:53:34.300654598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:53:34.313542 containerd[1609]: time="2026-01-28T01:53:34.313155401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:53:34.314112 containerd[1609]: time="2026-01-28T01:53:34.313387001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:34.314196 kubelet[2967]: E0128 01:53:34.313946 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:53:34.314196 kubelet[2967]: E0128 01:53:34.314009 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:53:34.315041 kubelet[2967]: E0128 01:53:34.314183 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:34.316631 kubelet[2967]: E0128 01:53:34.316306 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:53:35.745986 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:56882.service - OpenSSH per-connection server daemon (10.0.0.1:56882). 
Jan 28 01:53:35.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:35.770152 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:53:35.770275 kernel: audit: type=1130 audit(1769565215.742:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:36.241326 kubelet[2967]: E0128 01:53:36.240934 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:53:36.284125 containerd[1609]: time="2026-01-28T01:53:36.284022758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:53:36.408000 audit[6396]: USER_ACCT pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 28 01:53:36.412030 sshd[6396]: Accepted publickey for core from 10.0.0.1 port 56882 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:53:36.433161 sshd-session[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:36.465073 kernel: audit: type=1101 audit(1769565216.408:852): pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.465201 kernel: audit: type=1103 audit(1769565216.425:853): pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.425000 audit[6396]: CRED_ACQ pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.477397 systemd-logind[1586]: New session 17 of user core. 
Jan 28 01:53:36.500013 kernel: audit: type=1006 audit(1769565216.426:854): pid=6396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 01:53:36.500602 containerd[1609]: time="2026-01-28T01:53:36.497265455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:36.513557 containerd[1609]: time="2026-01-28T01:53:36.510356877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:53:36.513557 containerd[1609]: time="2026-01-28T01:53:36.510540688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:36.520885 kubelet[2967]: E0128 01:53:36.520081 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:53:36.520885 kubelet[2967]: E0128 01:53:36.520160 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:53:36.520885 kubelet[2967]: E0128 01:53:36.520315 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:53:36.537422 containerd[1609]: time="2026-01-28T01:53:36.537223651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:53:36.549256 kernel: audit: type=1300 audit(1769565216.426:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97cbef40 a2=3 a3=0 items=0 ppid=1 pid=6396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:36.426000 audit[6396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97cbef40 a2=3 a3=0 items=0 ppid=1 pid=6396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:36.588943 kernel: audit: type=1327 audit(1769565216.426:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:36.426000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:36.611554 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 01:53:36.639000 audit[6396]: USER_START pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.699740 kernel: audit: type=1105 audit(1769565216.639:855): pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.696000 audit[6400]: CRED_ACQ pid=6400 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.716811 kernel: audit: type=1103 audit(1769565216.696:856): pid=6400 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:36.716873 containerd[1609]: time="2026-01-28T01:53:36.703530863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:36.719252 containerd[1609]: time="2026-01-28T01:53:36.718846185Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:53:36.719252 containerd[1609]: time="2026-01-28T01:53:36.719052448Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:36.725265 kubelet[2967]: E0128 01:53:36.725136 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:53:36.725265 kubelet[2967]: E0128 01:53:36.725230 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:53:36.732349 kubelet[2967]: E0128 01:53:36.732046 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:36.740232 kubelet[2967]: E0128 01:53:36.740144 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:53:37.262237 sshd[6400]: Connection closed by 10.0.0.1 port 56882 Jan 28 01:53:37.262851 sshd-session[6396]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:37.264000 audit[6396]: USER_END pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:37.272624 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:56882.service: Deactivated successfully. Jan 28 01:53:37.277293 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:53:37.264000 audit[6396]: CRED_DISP pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:37.288606 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. 
Jan 28 01:53:37.293605 systemd-logind[1586]: Removed session 17. Jan 28 01:53:37.298104 kernel: audit: type=1106 audit(1769565217.264:857): pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:37.298337 kernel: audit: type=1104 audit(1769565217.264:858): pid=6396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:37.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:56882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:53:38.112514 containerd[1609]: time="2026-01-28T01:53:38.112225107Z" level=info msg="container event discarded" container=093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee type=CONTAINER_CREATED_EVENT Jan 28 01:53:38.112514 containerd[1609]: time="2026-01-28T01:53:38.112351131Z" level=info msg="container event discarded" container=093e89c286785bc3f94f60a7fe125e3cfeee525feb77d509ee36ea7535449aee type=CONTAINER_STARTED_EVENT Jan 28 01:53:38.209581 kubelet[2967]: E0128 01:53:38.201637 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:53:38.511853 containerd[1609]: time="2026-01-28T01:53:38.511750789Z" level=info msg="container event discarded" container=b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c type=CONTAINER_CREATED_EVENT Jan 28 01:53:39.726654 containerd[1609]: time="2026-01-28T01:53:39.726179805Z" level=info msg="container event discarded" container=b4d34c9f6d4c21cdb9cd5c5c0d8789a6e14f85ccd88751203c4be1a70d14f32c type=CONTAINER_STARTED_EVENT Jan 28 01:53:40.202306 kubelet[2967]: E0128 01:53:40.201429 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:53:42.331247 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:56898.service - OpenSSH per-connection server daemon (10.0.0.1:56898). 
Jan 28 01:53:42.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:56898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:42.360276 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:53:42.360440 kernel: audit: type=1130 audit(1769565222.337:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:56898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:53:42.733153 sshd[6426]: Accepted publickey for core from 10.0.0.1 port 56898 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:53:42.731000 audit[6426]: USER_ACCT pid=6426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:42.759618 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:42.818273 kernel: audit: type=1101 audit(1769565222.731:861): pid=6426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:42.818406 kernel: audit: type=1103 audit(1769565222.737:862): pid=6426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:42.737000 audit[6426]: CRED_ACQ pid=6426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:53:42.853155 systemd-logind[1586]: New session 18 of user core. Jan 28 01:53:42.878940 kernel: audit: type=1006 audit(1769565222.756:863): pid=6426 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 01:53:42.756000 audit[6426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe78d6cce0 a2=3 a3=0 items=0 ppid=1 pid=6426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:42.904254 kernel: audit: type=1300 audit(1769565222.756:863): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe78d6cce0 a2=3 a3=0 items=0 ppid=1 pid=6426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:53:42.959119 containerd[1609]: time="2026-01-28T01:53:42.953413942Z" level=info msg="container event discarded" container=b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2 type=CONTAINER_CREATED_EVENT Jan 28 01:53:42.959119 containerd[1609]: time="2026-01-28T01:53:42.958815254Z" level=info msg="container event discarded" container=b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2 type=CONTAINER_STARTED_EVENT Jan 28 01:53:42.756000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:43.013886 kernel: audit: type=1327 audit(1769565222.756:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:53:43.018314 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:53:43.066000 audit[6426]: USER_START pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.154281 kernel: audit: type=1105 audit(1769565223.066:864): pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.154501 kernel: audit: type=1103 audit(1769565223.085:865): pid=6431 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.085000 audit[6431]: CRED_ACQ pid=6431 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.788852 sshd[6431]: Connection closed by 10.0.0.1 port 56898
Jan 28 01:53:43.794043 sshd-session[6426]: pam_unix(sshd:session): session closed for user core
Jan 28 01:53:43.807000 audit[6426]: USER_END pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.888831 kernel: audit: type=1106 audit(1769565223.807:866): pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.888953 kernel: audit: type=1104 audit(1769565223.826:867): pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.826000 audit[6426]: CRED_DISP pid=6426 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:43.893567 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:56898.service: Deactivated successfully.
Jan 28 01:53:43.911037 systemd[1]: session-18.scope: Deactivated successfully.
Jan 28 01:53:43.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:56898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:43.963925 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit.
Jan 28 01:53:43.966508 systemd-logind[1586]: Removed session 18.
Jan 28 01:53:45.196753 kubelet[2967]: E0128 01:53:45.196212 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:53:45.198827 kubelet[2967]: E0128 01:53:45.197215 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:45.198827 kubelet[2967]: E0128 01:53:45.198145 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:53:46.201018 kubelet[2967]: E0128 01:53:46.200959 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:53:47.212966 kubelet[2967]: E0128 01:53:47.203981 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:53:48.823983 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:51766.service - OpenSSH per-connection server daemon (10.0.0.1:51766).
Jan 28 01:53:48.833505 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:53:48.833612 kernel: audit: type=1130 audit(1769565228.825:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:48.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:49.195000 audit[6446]: USER_ACCT pid=6446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.210087 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:53:49.223832 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 51766 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:53:49.203000 audit[6446]: CRED_ACQ pid=6446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.313820 systemd-logind[1586]: New session 19 of user core.
Jan 28 01:53:49.335167 kernel: audit: type=1101 audit(1769565229.195:870): pid=6446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.335309 kernel: audit: type=1103 audit(1769565229.203:871): pid=6446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.203000 audit[6446]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8d7b6b0 a2=3 a3=0 items=0 ppid=1 pid=6446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:49.386750 kernel: audit: type=1006 audit(1769565229.203:872): pid=6446 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jan 28 01:53:49.386864 kernel: audit: type=1300 audit(1769565229.203:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe8d7b6b0 a2=3 a3=0 items=0 ppid=1 pid=6446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:49.407003 kernel: audit: type=1327 audit(1769565229.203:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:49.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:49.412306 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 28 01:53:49.425000 audit[6446]: USER_START pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.474300 kernel: audit: type=1105 audit(1769565229.425:873): pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.474502 kernel: audit: type=1103 audit(1769565229.437:874): pid=6450 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.437000 audit[6450]: CRED_ACQ pid=6450 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.928091 sshd[6450]: Connection closed by 10.0.0.1 port 51766
Jan 28 01:53:49.929171 sshd-session[6446]: pam_unix(sshd:session): session closed for user core
Jan 28 01:53:49.926000 audit[6446]: USER_END pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.969294 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:51766.service: Deactivated successfully.
Jan 28 01:53:49.987272 systemd[1]: session-19.scope: Deactivated successfully.
Jan 28 01:53:50.018531 kernel: audit: type=1106 audit(1769565229.926:875): pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:50.019798 kernel: audit: type=1104 audit(1769565229.926:876): pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:49.926000 audit[6446]: CRED_DISP pid=6446 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:50.030099 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit.
Jan 28 01:53:50.032148 systemd-logind[1586]: Removed session 19.
Jan 28 01:53:49.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:50.235203 containerd[1609]: time="2026-01-28T01:53:50.228168500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 28 01:53:50.391212 containerd[1609]: time="2026-01-28T01:53:50.390462491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:50.405874 containerd[1609]: time="2026-01-28T01:53:50.403486005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 28 01:53:50.405874 containerd[1609]: time="2026-01-28T01:53:50.403624120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:50.407491 kubelet[2967]: E0128 01:53:50.406761 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:53:50.407491 kubelet[2967]: E0128 01:53:50.406823 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:53:50.407491 kubelet[2967]: E0128 01:53:50.407004 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:50.408654 kubelet[2967]: E0128 01:53:50.408592 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:53:51.192755 containerd[1609]: time="2026-01-28T01:53:51.191207401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 01:53:51.317358 containerd[1609]: time="2026-01-28T01:53:51.316788625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:51.323900 containerd[1609]: time="2026-01-28T01:53:51.322810552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 01:53:51.323900 containerd[1609]: time="2026-01-28T01:53:51.322914866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:51.324069 kubelet[2967]: E0128 01:53:51.323082 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:53:51.324069 kubelet[2967]: E0128 01:53:51.323204 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:53:51.324069 kubelet[2967]: E0128 01:53:51.323344 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:11f2d6a54a3d467fbd60c4526f82d473,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:51.336282 containerd[1609]: time="2026-01-28T01:53:51.333824103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 01:53:51.439245 containerd[1609]: time="2026-01-28T01:53:51.439079053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:51.446247 containerd[1609]: time="2026-01-28T01:53:51.446076002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 01:53:51.447743 containerd[1609]: time="2026-01-28T01:53:51.446206172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:51.447888 kubelet[2967]: E0128 01:53:51.446878 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:53:51.447888 kubelet[2967]: E0128 01:53:51.446946 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:53:51.447888 kubelet[2967]: E0128 01:53:51.447107 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:53:51.448921 kubelet[2967]: E0128 01:53:51.448848 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:53:54.198788 kubelet[2967]: E0128 01:53:54.198646 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:53:54.989820 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:59574.service - OpenSSH per-connection server daemon (10.0.0.1:59574).
Jan 28 01:53:55.001561 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:53:55.004753 kernel: audit: type=1130 audit(1769565234.989:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:54.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:55.383000 audit[6465]: USER_ACCT pid=6465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.404872 kernel: audit: type=1101 audit(1769565235.383:879): pid=6465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.393326 sshd-session[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:53:55.405594 sshd[6465]: Accepted publickey for core from 10.0.0.1 port 59574 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:53:55.386000 audit[6465]: CRED_ACQ pid=6465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.430791 kernel: audit: type=1103 audit(1769565235.386:880): pid=6465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.386000 audit[6465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef93406e0 a2=3 a3=0 items=0 ppid=1 pid=6465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:55.465784 systemd-logind[1586]: New session 20 of user core.
Jan 28 01:53:55.505353 kernel: audit: type=1006 audit(1769565235.386:881): pid=6465 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jan 28 01:53:55.505544 kernel: audit: type=1300 audit(1769565235.386:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef93406e0 a2=3 a3=0 items=0 ppid=1 pid=6465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:53:55.523750 kernel: audit: type=1327 audit(1769565235.386:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:55.386000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:53:55.527077 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 28 01:53:55.580000 audit[6465]: USER_START pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.626987 kernel: audit: type=1105 audit(1769565235.580:882): pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.591000 audit[6469]: CRED_ACQ pid=6469 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:55.668623 kernel: audit: type=1103 audit(1769565235.591:883): pid=6469 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:56.177036 sshd[6469]: Connection closed by 10.0.0.1 port 59574
Jan 28 01:53:56.182055 sshd-session[6465]: pam_unix(sshd:session): session closed for user core
Jan 28 01:53:56.203000 audit[6465]: USER_END pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:56.240367 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:59574.service: Deactivated successfully.
Jan 28 01:53:56.245654 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit.
Jan 28 01:53:56.255374 systemd[1]: session-20.scope: Deactivated successfully.
Jan 28 01:53:56.271304 systemd-logind[1586]: Removed session 20.
Jan 28 01:53:56.286785 kernel: audit: type=1106 audit(1769565236.203:884): pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:56.203000 audit[6465]: CRED_DISP pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:56.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:59574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:53:56.341781 kernel: audit: type=1104 audit(1769565236.203:885): pid=6465 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:53:57.219828 containerd[1609]: time="2026-01-28T01:53:57.217962559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 01:53:57.378773 containerd[1609]: time="2026-01-28T01:53:57.378454067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:53:57.382273 containerd[1609]: time="2026-01-28T01:53:57.382219985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 01:53:57.382625 containerd[1609]: time="2026-01-28T01:53:57.382566369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:53:57.391825 kubelet[2967]: E0128 01:53:57.391606 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:53:57.397082 kubelet[2967]: E0128 01:53:57.396004 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:53:57.397082 kubelet[2967]: E0128 01:53:57.396314 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:57.407786 containerd[1609]: time="2026-01-28T01:53:57.398024411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:53:57.408212 kubelet[2967]: E0128 01:53:57.405375 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:53:57.512352 containerd[1609]: time="2026-01-28T01:53:57.512146228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
01:53:57.521775 containerd[1609]: time="2026-01-28T01:53:57.521521370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:57.527154 containerd[1609]: time="2026-01-28T01:53:57.526530547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:53:57.542814 kubelet[2967]: E0128 01:53:57.541272 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:57.542814 kubelet[2967]: E0128 01:53:57.541354 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:57.542814 kubelet[2967]: E0128 01:53:57.541853 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjkwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:57.549220 kubelet[2967]: E0128 01:53:57.545318 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:53:58.215538 containerd[1609]: time="2026-01-28T01:53:58.214862947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:53:58.360003 containerd[1609]: time="2026-01-28T01:53:58.359945313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:53:58.366709 containerd[1609]: time="2026-01-28T01:53:58.366543671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:53:58.366874 containerd[1609]: time="2026-01-28T01:53:58.366757829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:53:58.368121 kubelet[2967]: E0128 01:53:58.367357 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:58.368121 kubelet[2967]: E0128 01:53:58.367482 2967 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:53:58.368121 kubelet[2967]: E0128 01:53:58.367651 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rq4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:53:58.369596 kubelet[2967]: E0128 01:53:58.369537 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:54:01.213065 containerd[1609]: time="2026-01-28T01:54:01.213012529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:54:01.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:01.245234 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:59576.service - OpenSSH per-connection server daemon (10.0.0.1:59576). Jan 28 01:54:01.267169 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:54:01.267311 kernel: audit: type=1130 audit(1769565241.244:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:01.391558 containerd[1609]: time="2026-01-28T01:54:01.387260444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:01.394482 containerd[1609]: time="2026-01-28T01:54:01.394348846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:54:01.394636 containerd[1609]: time="2026-01-28T01:54:01.394541193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:01.399291 kubelet[2967]: E0128 01:54:01.396868 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:54:01.399291 kubelet[2967]: E0128 01:54:01.396963 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 
01:54:01.399291 kubelet[2967]: E0128 01:54:01.397108 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:01.409101 containerd[1609]: time="2026-01-28T01:54:01.400526104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:54:01.538973 containerd[1609]: time="2026-01-28T01:54:01.538507524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:01.539646 containerd[1609]: time="2026-01-28T01:54:01.539250745Z" level=info msg="container event discarded" container=0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9 type=CONTAINER_CREATED_EVENT Jan 28 01:54:01.543625 containerd[1609]: time="2026-01-28T01:54:01.541877015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:01.543625 containerd[1609]: time="2026-01-28T01:54:01.541919825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:54:01.543868 kubelet[2967]: E0128 01:54:01.542265 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:54:01.543868 kubelet[2967]: E0128 01:54:01.542328 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:54:01.543868 kubelet[2967]: E0128 01:54:01.542582 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:01.547908 kubelet[2967]: E0128 01:54:01.545088 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:54:01.607863 sshd[6518]: Accepted publickey for core from 10.0.0.1 port 59576 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:54:01.605000 audit[6518]: USER_ACCT pid=6518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.625447 sshd-session[6518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:54:01.644505 kernel: audit: type=1101 audit(1769565241.605:888): pid=6518 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.621000 audit[6518]: CRED_ACQ pid=6518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.682301 systemd-logind[1586]: New session 21 of user core. Jan 28 01:54:01.697407 kernel: audit: type=1103 audit(1769565241.621:889): pid=6518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.697512 kernel: audit: type=1006 audit(1769565241.621:890): pid=6518 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 01:54:01.702782 kernel: audit: type=1300 audit(1769565241.621:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde9873ea0 a2=3 a3=0 items=0 ppid=1 pid=6518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:01.621000 audit[6518]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde9873ea0 a2=3 a3=0 items=0 ppid=1 pid=6518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:01.621000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:01.793786 kernel: audit: type=1327 audit(1769565241.621:890): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:01.798050 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:54:01.827000 audit[6518]: USER_START pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.899763 kernel: audit: type=1105 audit(1769565241.827:891): pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.906000 audit[6522]: CRED_ACQ pid=6522 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:01.953919 kernel: audit: type=1103 audit(1769565241.906:892): pid=6522 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:02.478133 containerd[1609]: time="2026-01-28T01:54:02.475922392Z" level=info msg="container event discarded" container=0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9 type=CONTAINER_STARTED_EVENT Jan 28 01:54:02.515314 sshd[6522]: Connection closed by 10.0.0.1 port 59576 Jan 28 01:54:02.528863 sshd-session[6518]: pam_unix(sshd:session): session closed for user core Jan 28 01:54:02.546000 audit[6518]: USER_END pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:02.587442 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:59576.service: Deactivated successfully.
Jan 28 01:54:02.620140 systemd[1]: session-21.scope: Deactivated successfully.
Jan 28 01:54:02.633204 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit.
Jan 28 01:54:02.665071 kernel: audit: type=1106 audit(1769565242.546:893): pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:02.665254 kernel: audit: type=1104 audit(1769565242.546:894): pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:02.546000 audit[6518]: CRED_DISP pid=6518 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:02.667843 systemd-logind[1586]: Removed session 21.
Jan 28 01:54:02.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:04.260230 kubelet[2967]: E0128 01:54:04.259185 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:54:06.228112 kubelet[2967]: E0128 01:54:06.227021 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:54:07.554943 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:47654.service - OpenSSH per-connection server daemon (10.0.0.1:47654).
Jan 28 01:54:07.569795 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:54:07.570632 kernel: audit: type=1130 audit(1769565247.551:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:47654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:07.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:47654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:07.777000 audit[6536]: USER_ACCT pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:07.782829 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:07.785515 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 47654 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:07.780000 audit[6536]: CRED_ACQ pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:07.839248 kernel: audit: type=1101 audit(1769565247.777:897): pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:07.839429 kernel: audit: type=1103 audit(1769565247.780:898): pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:07.864428 systemd-logind[1586]: New session 22 of user core.
Jan 28 01:54:07.868024 kernel: audit: type=1006 audit(1769565247.780:899): pid=6536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jan 28 01:54:07.780000 audit[6536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1e73bc50 a2=3 a3=0 items=0 ppid=1 pid=6536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:07.780000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:07.907620 kernel: audit: type=1300 audit(1769565247.780:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1e73bc50 a2=3 a3=0 items=0 ppid=1 pid=6536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:07.907831 kernel: audit: type=1327 audit(1769565247.780:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:07.918141 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 01:54:07.957000 audit[6536]: USER_START pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.020514 kernel: audit: type=1105 audit(1769565247.957:900): pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:07.987000 audit[6540]: CRED_ACQ pid=6540 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.088790 kernel: audit: type=1103 audit(1769565247.987:901): pid=6540 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.405832 sshd[6540]: Connection closed by 10.0.0.1 port 47654
Jan 28 01:54:08.404833 sshd-session[6536]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:08.412000 audit[6536]: USER_END pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.498525 kernel: audit: type=1106 audit(1769565248.412:902): pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.416000 audit[6536]: CRED_DISP pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.536858 kernel: audit: type=1104 audit(1769565248.416:903): pid=6536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.565444 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:47654.service: Deactivated successfully.
Jan 28 01:54:08.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:47654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:08.603809 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 01:54:08.613562 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit.
Jan 28 01:54:08.622926 systemd-logind[1586]: Removed session 22.
Jan 28 01:54:08.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:47668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:08.639517 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:47668.service - OpenSSH per-connection server daemon (10.0.0.1:47668).
Jan 28 01:54:08.934000 audit[6554]: USER_ACCT pid=6554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.938413 sshd[6554]: Accepted publickey for core from 10.0.0.1 port 47668 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:08.941000 audit[6554]: CRED_ACQ pid=6554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:08.973000 audit[6554]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff74083210 a2=3 a3=0 items=0 ppid=1 pid=6554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:08.973000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:08.977850 sshd-session[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:09.030226 systemd-logind[1586]: New session 23 of user core.
Jan 28 01:54:09.061163 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 01:54:09.098000 audit[6554]: USER_START pid=6554 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:09.106000 audit[6558]: CRED_ACQ pid=6558 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:09.219620 kubelet[2967]: E0128 01:54:09.216166 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:54:09.227772 kubelet[2967]: E0128 01:54:09.227287 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:54:09.810160 sshd[6558]: Connection closed by 10.0.0.1 port 47668
Jan 28 01:54:09.809226 sshd-session[6554]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:09.810000 audit[6554]: USER_END pid=6554 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:09.812000 audit[6554]: CRED_DISP pid=6554 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:09.866007 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:47668.service: Deactivated successfully.
Jan 28 01:54:09.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:47668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:09.908817 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 01:54:09.936973 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit.
Jan 28 01:54:09.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:47670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:09.966314 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:47670.service - OpenSSH per-connection server daemon (10.0.0.1:47670).
Jan 28 01:54:09.998231 systemd-logind[1586]: Removed session 23.
Jan 28 01:54:10.328447 sshd[6570]: Accepted publickey for core from 10.0.0.1 port 47670 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:10.322000 audit[6570]: USER_ACCT pid=6570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:10.333000 audit[6570]: CRED_ACQ pid=6570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:10.335000 audit[6570]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd33c49150 a2=3 a3=0 items=0 ppid=1 pid=6570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:10.335000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:10.338278 sshd-session[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:10.422294 systemd-logind[1586]: New session 24 of user core.
Jan 28 01:54:10.469414 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 01:54:10.523000 audit[6570]: USER_START pid=6570 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:10.543000 audit[6574]: CRED_ACQ pid=6574 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:11.199552 kubelet[2967]: E0128 01:54:11.197935 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:54:11.612573 sshd[6574]: Connection closed by 10.0.0.1 port 47670
Jan 28 01:54:11.614513 sshd-session[6570]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:11.627000 audit[6570]: USER_END pid=6570 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:11.628000 audit[6570]: CRED_DISP pid=6570 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:11.641990 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:47670.service: Deactivated successfully.
Jan 28 01:54:11.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:47670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:11.643175 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit.
Jan 28 01:54:11.667912 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 01:54:11.715130 systemd-logind[1586]: Removed session 24.
Jan 28 01:54:14.264914 kubelet[2967]: E0128 01:54:14.264593 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:54:16.103931 update_engine[1589]: I20260128 01:54:16.103439 1589 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 28 01:54:16.103931 update_engine[1589]: I20260128 01:54:16.103633 1589 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 28 01:54:16.116250 update_engine[1589]: I20260128 01:54:16.114228 1589 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 28 01:54:16.119482 update_engine[1589]: I20260128 01:54:16.117267 1589 omaha_request_params.cc:62] Current group set to alpha
Jan 28 01:54:16.119482 update_engine[1589]: I20260128 01:54:16.119427 1589 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 28 01:54:16.119482 update_engine[1589]: I20260128 01:54:16.119454 1589 update_attempter.cc:643] Scheduling an action processor start.
Jan 28 01:54:16.119482 update_engine[1589]: I20260128 01:54:16.119482 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 28 01:54:16.119789 update_engine[1589]: I20260128 01:54:16.119556 1589 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 28 01:54:16.122455 update_engine[1589]: I20260128 01:54:16.121896 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 28 01:54:16.122455 update_engine[1589]: I20260128 01:54:16.121968 1589 omaha_request_action.cc:272] Request:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]:
Jan 28 01:54:16.122455 update_engine[1589]: I20260128 01:54:16.121985 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:54:16.172021 update_engine[1589]: I20260128 01:54:16.171010 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:54:16.174527 update_engine[1589]: I20260128 01:54:16.174086 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:54:16.205040 locksmithd[1652]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 28 01:54:16.208997 update_engine[1589]: E20260128 01:54:16.205984 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 28 01:54:16.208997 update_engine[1589]: I20260128 01:54:16.206157 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 28 01:54:16.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:33166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:16.652940 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:33166.service - OpenSSH per-connection server daemon (10.0.0.1:33166).
Jan 28 01:54:16.679789 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jan 28 01:54:16.679845 kernel: audit: type=1130 audit(1769565256.652:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:33166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:16.982000 audit[6591]: USER_ACCT pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:16.994392 sshd[6591]: Accepted publickey for core from 10.0.0.1 port 33166 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:17.005500 sshd-session[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:17.030397 kernel: audit: type=1101 audit(1769565256.982:924): pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:16.995000 audit[6591]: CRED_ACQ pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.053132 systemd-logind[1586]: New session 25 of user core.
Jan 28 01:54:17.100766 kernel: audit: type=1103 audit(1769565256.995:925): pid=6591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.100965 kernel: audit: type=1006 audit(1769565256.995:926): pid=6591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 28 01:54:16.995000 audit[6591]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd690b3570 a2=3 a3=0 items=0 ppid=1 pid=6591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:17.166520 kernel: audit: type=1300 audit(1769565256.995:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd690b3570 a2=3 a3=0 items=0 ppid=1 pid=6591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:17.167188 kernel: audit: type=1327 audit(1769565256.995:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:16.995000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:17.167954 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 01:54:17.202853 kernel: audit: type=1105 audit(1769565257.190:927): pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.190000 audit[6591]: USER_START pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.213130 kubelet[2967]: E0128 01:54:17.208771 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:54:17.319087 kernel: audit: type=1103 audit(1769565257.214:928): pid=6595 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.214000 audit[6595]: CRED_ACQ pid=6595 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.737482 sshd[6595]: Connection closed by 10.0.0.1 port 33166
Jan 28 01:54:17.738514 sshd-session[6591]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:17.736000 audit[6591]: USER_END pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.751491 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:33166.service: Deactivated successfully.
Jan 28 01:54:17.765903 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 01:54:17.779221 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit.
Jan 28 01:54:17.741000 audit[6591]: CRED_DISP pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.794627 systemd-logind[1586]: Removed session 25.
Jan 28 01:54:17.824203 kernel: audit: type=1106 audit(1769565257.736:929): pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.824407 kernel: audit: type=1104 audit(1769565257.741:930): pid=6591 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:17.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:33166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:19.195544 kubelet[2967]: E0128 01:54:19.193003 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:54:20.226070 kubelet[2967]: E0128 01:54:20.225750 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:54:21.228725 kubelet[2967]: E0128 01:54:21.223996 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:54:22.818241 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:52898.service - OpenSSH per-connection server daemon (10.0.0.1:52898).
Jan 28 01:54:22.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:52898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:22.839871 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:54:22.839992 kernel: audit: type=1130 audit(1769565262.827:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:52898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:23.017000 audit[6609]: USER_ACCT pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.020124 sshd[6609]: Accepted publickey for core from 10.0.0.1 port 52898 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:23.034000 sshd-session[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:23.026000 audit[6609]: CRED_ACQ pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.087531 systemd-logind[1586]: New session 26 of user core.
Jan 28 01:54:23.127754 kernel: audit: type=1101 audit(1769565263.017:933): pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.127915 kernel: audit: type=1103 audit(1769565263.026:934): pid=6609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.163476 kernel: audit: type=1006 audit(1769565263.026:935): pid=6609 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 28 01:54:23.163654 kernel: audit: type=1300 audit(1769565263.026:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde8909a40 a2=3 a3=0 items=0 ppid=1 pid=6609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:23.026000 audit[6609]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde8909a40 a2=3 a3=0 items=0 ppid=1 pid=6609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:23.171162 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 01:54:23.191257 kernel: audit: type=1327 audit(1769565263.026:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:23.026000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:23.210000 audit[6609]: USER_START pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.303523 kernel: audit: type=1105 audit(1769565263.210:936): pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.304778 kernel: audit: type=1103 audit(1769565263.234:937): pid=6613 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:23.234000 audit[6613]: CRED_ACQ pid=6613 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:24.117167 sshd[6613]: Connection closed by 10.0.0.1 port 52898
Jan 28 01:54:24.129117 sshd-session[6609]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:24.146000 audit[6609]: USER_END pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:24.172840 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:52898.service: Deactivated successfully.
Jan 28 01:54:24.175090 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit.
Jan 28 01:54:24.224786 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 01:54:24.146000 audit[6609]: CRED_DISP pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:24.268872 systemd-logind[1586]: Removed session 26.
Jan 28 01:54:24.300589 kernel: audit: type=1106 audit(1769565264.146:938): pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:24.300818 kernel: audit: type=1104 audit(1769565264.146:939): pid=6609 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:24.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:52898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:25.192193 kubelet[2967]: E0128 01:54:25.192129 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:54:26.824880 update_engine[1589]: I20260128 01:54:26.821799 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:54:26.824880 update_engine[1589]: I20260128 01:54:26.821923 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:54:26.824880 update_engine[1589]: I20260128 01:54:26.822469 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:54:26.847086 update_engine[1589]: E20260128 01:54:26.842881 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 28 01:54:26.847086 update_engine[1589]: I20260128 01:54:26.843024 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 28 01:54:29.191839 kubelet[2967]: E0128 01:54:29.191082 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:54:29.263584 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:54:29.263878 kernel: audit: type=1130 audit(1769565269.242:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:29.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:29.243542 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:52908.service - OpenSSH per-connection server daemon (10.0.0.1:52908).
Jan 28 01:54:29.308763 kubelet[2967]: E0128 01:54:29.304335 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:54:29.308763 kubelet[2967]: E0128 01:54:29.304617 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:54:29.600000 audit[6628]: USER_ACCT pid=6628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.608996 sshd[6628]: Accepted publickey for core from 10.0.0.1 port 52908 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:29.609467 sshd-session[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:29.602000 audit[6628]: CRED_ACQ pid=6628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.644886 systemd-logind[1586]: New session 27 of user core.
Jan 28 01:54:29.678006 kernel: audit: type=1101 audit(1769565269.600:942): pid=6628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.678160 kernel: audit: type=1103 audit(1769565269.602:943): pid=6628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.678208 kernel: audit: type=1006 audit(1769565269.602:944): pid=6628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 28 01:54:29.602000 audit[6628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccc6dafb0 a2=3 a3=0 items=0 ppid=1 pid=6628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:29.727393 kernel: audit: type=1300 audit(1769565269.602:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccc6dafb0 a2=3 a3=0 items=0 ppid=1 pid=6628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:29.602000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:29.734528 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 01:54:29.753178 kernel: audit: type=1327 audit(1769565269.602:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:29.756000 audit[6628]: USER_START pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.866090 kernel: audit: type=1105 audit(1769565269.756:945): pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.866270 kernel: audit: type=1103 audit(1769565269.772:946): pid=6632 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:29.772000 audit[6632]: CRED_ACQ pid=6632 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:30.141617 sshd[6632]: Connection closed by 10.0.0.1 port 52908
Jan 28 01:54:30.140894 sshd-session[6628]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:30.157000 audit[6628]: USER_END pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:30.197031 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:52908.service: Deactivated successfully.
Jan 28 01:54:30.157000 audit[6628]: CRED_DISP pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:30.224514 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 01:54:30.254745 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit.
Jan 28 01:54:30.264844 systemd-logind[1586]: Removed session 27.
Jan 28 01:54:30.277594 kernel: audit: type=1106 audit(1769565270.157:947): pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:30.277834 kernel: audit: type=1104 audit(1769565270.157:948): pid=6628 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:30.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:31.241245 kubelet[2967]: E0128 01:54:31.240321 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:54:34.190596 kubelet[2967]: E0128 01:54:34.189943 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:54:34.205132 containerd[1609]: time="2026-01-28T01:54:34.196218013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 28 01:54:34.236636 kubelet[2967]: E0128 01:54:34.234470 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:54:34.368469 containerd[1609]: time="2026-01-28T01:54:34.366995634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:54:34.383654 containerd[1609]: time="2026-01-28T01:54:34.383112991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 28 01:54:34.383654 containerd[1609]: time="2026-01-28T01:54:34.383216534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:54:34.387854 kubelet[2967]: E0128 01:54:34.385549 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:54:34.387854 kubelet[2967]: E0128 01:54:34.385612 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:54:34.387854 kubelet[2967]: E0128 01:54:34.385879 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:54:34.391394 kubelet[2967]: E0128 01:54:34.390774 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:54:35.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:33960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:35.195209 systemd[1]: Started sshd@26-10.0.0.85:22-10.0.0.1:33960.service - OpenSSH per-connection server daemon (10.0.0.1:33960).
Jan 28 01:54:35.202457 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:54:35.202557 kernel: audit: type=1130 audit(1769565275.193:950): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:33960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:35.621413 sshd[6672]: Accepted publickey for core from 10.0.0.1 port 33960 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:54:35.619000 audit[6672]: USER_ACCT pid=6672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:35.632428 sshd-session[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:54:35.698884 kernel: audit: type=1101 audit(1769565275.619:951): pid=6672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:35.626000 audit[6672]: CRED_ACQ pid=6672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:35.714430 systemd-logind[1586]: New session 28 of user core.
Jan 28 01:54:35.791889 kernel: audit: type=1103 audit(1769565275.626:952): pid=6672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:35.792074 kernel: audit: type=1006 audit(1769565275.626:953): pid=6672 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 28 01:54:35.626000 audit[6672]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8efd220 a2=3 a3=0 items=0 ppid=1 pid=6672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:35.888089 kernel: audit: type=1300 audit(1769565275.626:953): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8efd220 a2=3 a3=0 items=0 ppid=1 pid=6672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:54:35.626000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:35.916917 kernel: audit: type=1327 audit(1769565275.626:953): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:54:35.925481 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 01:54:35.971000 audit[6672]: USER_START pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.046009 kernel: audit: type=1105 audit(1769565275.971:954): pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:35.994000 audit[6676]: CRED_ACQ pid=6676 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.160072 kernel: audit: type=1103 audit(1769565275.994:955): pid=6676 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.232134 kubelet[2967]: E0128 01:54:36.231816 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:54:36.824019 update_engine[1589]: I20260128 01:54:36.823110 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:54:36.824019 update_engine[1589]: I20260128 01:54:36.823246 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:54:36.824019 update_engine[1589]: I20260128 01:54:36.823956 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:54:36.915538 update_engine[1589]: E20260128 01:54:36.911804 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 28 01:54:36.915538 update_engine[1589]: I20260128 01:54:36.911959 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 28 01:54:36.956514 sshd[6676]: Connection closed by 10.0.0.1 port 33960
Jan 28 01:54:36.960542 sshd-session[6672]: pam_unix(sshd:session): session closed for user core
Jan 28 01:54:36.970000 audit[6672]: USER_END pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.982967 systemd[1]: sshd@26-10.0.0.85:22-10.0.0.1:33960.service: Deactivated successfully.
Jan 28 01:54:36.991167 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 01:54:37.003144 systemd-logind[1586]: Session 28 logged out. Waiting for processes to exit.
Jan 28 01:54:37.006620 systemd-logind[1586]: Removed session 28.
Jan 28 01:54:37.055978 kernel: audit: type=1106 audit(1769565276.970:956): pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:37.056113 kernel: audit: type=1104 audit(1769565276.971:957): pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.971000 audit[6672]: CRED_DISP pid=6672 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:54:36.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:33960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:40.201756 kubelet[2967]: E0128 01:54:40.188405 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:54:41.221317 kubelet[2967]: E0128 01:54:41.208731 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:54:42.024223 systemd[1]: Started sshd@27-10.0.0.85:22-10.0.0.1:33976.service - OpenSSH per-connection server daemon (10.0.0.1:33976).
Jan 28 01:54:42.044032 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:54:42.045339 kernel: audit: type=1130 audit(1769565282.022:959): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.85:22-10.0.0.1:33976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.85:22-10.0.0.1:33976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:54:42.214302 containerd[1609]: time="2026-01-28T01:54:42.213099274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 01:54:42.386383 containerd[1609]: time="2026-01-28T01:54:42.385993984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 01:54:42.398028 containerd[1609]: time="2026-01-28T01:54:42.397972320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 01:54:42.398230 containerd[1609]: time="2026-01-28T01:54:42.398210723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 28 01:54:42.399922 kubelet[2967]: E0128 01:54:42.399175 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:54:42.399922 kubelet[2967]: E0128 01:54:42.399330 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:54:42.399922 kubelet[2967]: E0128 01:54:42.399588 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:54:42.406396 kubelet[2967]: E0128 01:54:42.401231 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:54:42.435000 audit[6700]:
USER_ACCT pid=6700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.463899 sshd[6700]: Accepted publickey for core from 10.0.0.1 port 33976 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:54:42.476515 sshd-session[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:54:42.511840 kernel: audit: type=1101 audit(1769565282.435:960): pid=6700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.463000 audit[6700]: CRED_ACQ pid=6700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.583356 kernel: audit: type=1103 audit(1769565282.463:961): pid=6700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.583500 kernel: audit: type=1006 audit(1769565282.473:962): pid=6700 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 28 01:54:42.593835 systemd-logind[1586]: New session 29 of user core. 
Jan 28 01:54:42.473000 audit[6700]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe529be3e0 a2=3 a3=0 items=0 ppid=1 pid=6700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:42.632326 kernel: audit: type=1300 audit(1769565282.473:962): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe529be3e0 a2=3 a3=0 items=0 ppid=1 pid=6700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:42.685629 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:54:42.473000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:42.726774 kernel: audit: type=1327 audit(1769565282.473:962): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:42.732000 audit[6700]: USER_START pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.796950 kernel: audit: type=1105 audit(1769565282.732:963): pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.736000 audit[6704]: CRED_ACQ pid=6704 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:42.812991 kernel: audit: type=1103 audit(1769565282.736:964): pid=6704 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:43.232618 containerd[1609]: time="2026-01-28T01:54:43.221165200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:54:43.447437 containerd[1609]: time="2026-01-28T01:54:43.447373310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:43.488940 containerd[1609]: time="2026-01-28T01:54:43.482162146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:54:43.488940 containerd[1609]: time="2026-01-28T01:54:43.486530560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:43.489159 kubelet[2967]: E0128 01:54:43.486970 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:54:43.489159 kubelet[2967]: E0128 01:54:43.487034 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:54:43.531380 kubelet[2967]: E0128 01:54:43.487234 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:43.554374 containerd[1609]: time="2026-01-28T01:54:43.548491466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:54:43.604904 sshd[6704]: Connection closed by 10.0.0.1 port 33976 Jan 28 01:54:43.601582 sshd-session[6700]: pam_unix(sshd:session): session closed for user core Jan 28 01:54:43.609000 audit[6700]: USER_END pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:43.635984 systemd[1]: sshd@27-10.0.0.85:22-10.0.0.1:33976.service: Deactivated successfully. Jan 28 01:54:43.638334 systemd-logind[1586]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:54:43.643478 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:54:43.651963 systemd-logind[1586]: Removed session 29. 
Jan 28 01:54:43.609000 audit[6700]: CRED_DISP pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:43.748239 containerd[1609]: time="2026-01-28T01:54:43.747608984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:43.765967 kernel: audit: type=1106 audit(1769565283.609:965): pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:43.766090 kernel: audit: type=1104 audit(1769565283.609:966): pid=6700 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:43.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.85:22-10.0.0.1:33976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:54:43.766315 kubelet[2967]: E0128 01:54:43.763535 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:54:43.766315 kubelet[2967]: E0128 01:54:43.763591 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:54:43.766315 kubelet[2967]: E0128 01:54:43.763826 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:43.766617 containerd[1609]: time="2026-01-28T01:54:43.760965623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:54:43.766617 containerd[1609]: time="2026-01-28T01:54:43.762776411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:43.767182 kubelet[2967]: E0128 01:54:43.767031 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:54:44.217134 containerd[1609]: time="2026-01-28T01:54:44.212167322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:54:44.324210 containerd[1609]: time="2026-01-28T01:54:44.317946308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:44.346372 containerd[1609]: time="2026-01-28T01:54:44.329606395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:54:44.346372 containerd[1609]: time="2026-01-28T01:54:44.329859976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:44.346606 kubelet[2967]: E0128 01:54:44.330418 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:54:44.346606 kubelet[2967]: E0128 01:54:44.330486 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:54:44.346606 kubelet[2967]: E0128 01:54:44.330641 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:11f2d6a54a3d467fbd60c4526f82d473,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:44.353231 containerd[1609]: time="2026-01-28T01:54:44.348178754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:54:44.521986 containerd[1609]: 
time="2026-01-28T01:54:44.521837332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:44.551643 containerd[1609]: time="2026-01-28T01:54:44.551533838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:54:44.555186 containerd[1609]: time="2026-01-28T01:54:44.551789240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:44.561440 kubelet[2967]: E0128 01:54:44.556951 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:54:44.561440 kubelet[2967]: E0128 01:54:44.557008 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:54:44.561440 kubelet[2967]: E0128 01:54:44.557151 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:44.561440 kubelet[2967]: E0128 01:54:44.560784 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:54:45.220318 kubelet[2967]: E0128 01:54:45.220103 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:54:46.821037 update_engine[1589]: I20260128 01:54:46.820557 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:54:46.823965 update_engine[1589]: I20260128 01:54:46.822438 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:54:46.823965 update_engine[1589]: I20260128 01:54:46.823175 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:54:46.863830 update_engine[1589]: E20260128 01:54:46.855110 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855371 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855397 1589 omaha_request_action.cc:617] Omaha request response: Jan 28 01:54:46.863830 update_engine[1589]: E20260128 01:54:46.855528 1589 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855573 1589 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855585 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855597 1589 update_attempter.cc:306] Processing Done. Jan 28 01:54:46.863830 update_engine[1589]: E20260128 01:54:46.855618 1589 update_attempter.cc:619] Update failed. Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855630 1589 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855639 1589 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855649 1589 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855854 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855901 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:54:46.863830 update_engine[1589]: I20260128 01:54:46.855913 1589 omaha_request_action.cc:272] Request: Jan 28 01:54:46.863830 update_engine[1589]: Jan 28 01:54:46.863830 update_engine[1589]: Jan 28 01:54:46.864514 update_engine[1589]: Jan 28 01:54:46.864514 update_engine[1589]: Jan 28 01:54:46.864514 update_engine[1589]: Jan 28 01:54:46.864514 update_engine[1589]: Jan 28 01:54:46.864514 update_engine[1589]: I20260128 01:54:46.855924 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:54:46.864514 update_engine[1589]: I20260128 01:54:46.863641 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:54:46.872341 locksmithd[1652]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:54:46.892088 update_engine[1589]: I20260128 01:54:46.889892 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:54:46.905850 update_engine[1589]: E20260128 01:54:46.905651 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:54:46.906086 update_engine[1589]: I20260128 01:54:46.906054 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:54:46.906208 update_engine[1589]: I20260128 01:54:46.906181 1589 omaha_request_action.cc:617] Omaha request response: Jan 28 01:54:46.906355 update_engine[1589]: I20260128 01:54:46.906328 1589 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:54:46.906455 update_engine[1589]: I20260128 01:54:46.906428 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:54:46.906538 update_engine[1589]: I20260128 01:54:46.906513 1589 update_attempter.cc:306] Processing Done. Jan 28 01:54:46.906629 update_engine[1589]: I20260128 01:54:46.906606 1589 update_attempter.cc:310] Error event sent. 
Jan 28 01:54:46.906827 update_engine[1589]: I20260128 01:54:46.906796 1589 update_check_scheduler.cc:74] Next update check in 49m15s Jan 28 01:54:46.913497 locksmithd[1652]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:54:47.208597 containerd[1609]: time="2026-01-28T01:54:47.200203585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:54:47.352997 containerd[1609]: time="2026-01-28T01:54:47.352942313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:47.373109 containerd[1609]: time="2026-01-28T01:54:47.372988686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:54:47.373109 containerd[1609]: time="2026-01-28T01:54:47.373107126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:47.373641 kubelet[2967]: E0128 01:54:47.373418 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:54:47.373641 kubelet[2967]: E0128 01:54:47.373484 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:54:47.384801 kubelet[2967]: E0128 01:54:47.373642 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjkwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:47.384801 kubelet[2967]: E0128 01:54:47.382786 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:54:48.645885 systemd[1]: Started sshd@28-10.0.0.85:22-10.0.0.1:39204.service - OpenSSH per-connection server daemon (10.0.0.1:39204). 
Jan 28 01:54:48.652857 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:54:48.652948 kernel: audit: type=1130 audit(1769565288.644:968): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.85:22-10.0.0.1:39204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:48.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.85:22-10.0.0.1:39204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:48.881000 audit[6730]: USER_ACCT pid=6730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:48.887794 sshd[6730]: Accepted publickey for core from 10.0.0.1 port 39204 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:54:48.901117 sshd-session[6730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:54:48.939852 systemd-logind[1586]: New session 30 of user core. 
Jan 28 01:54:48.896000 audit[6730]: CRED_ACQ pid=6730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:48.997126 kernel: audit: type=1101 audit(1769565288.881:969): pid=6730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:48.998924 kernel: audit: type=1103 audit(1769565288.896:970): pid=6730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:48.998981 kernel: audit: type=1006 audit(1769565288.896:971): pid=6730 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 28 01:54:49.030104 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 28 01:54:48.896000 audit[6730]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7afcd6f0 a2=3 a3=0 items=0 ppid=1 pid=6730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:49.070302 kernel: audit: type=1300 audit(1769565288.896:971): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7afcd6f0 a2=3 a3=0 items=0 ppid=1 pid=6730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:49.070838 kernel: audit: type=1327 audit(1769565288.896:971): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:48.896000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:49.050000 audit[6730]: USER_START pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.184560 kernel: audit: type=1105 audit(1769565289.050:972): pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.188524 kernel: audit: type=1103 audit(1769565289.063:973): pid=6734 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 
01:54:49.063000 audit[6734]: CRED_ACQ pid=6734 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.206116 kubelet[2967]: E0128 01:54:49.205912 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:54:49.244804 containerd[1609]: time="2026-01-28T01:54:49.229013530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:54:49.409206 containerd[1609]: time="2026-01-28T01:54:49.409102898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:54:49.417028 containerd[1609]: time="2026-01-28T01:54:49.416978926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:54:49.417331 containerd[1609]: time="2026-01-28T01:54:49.417297137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:54:49.417642 kubelet[2967]: E0128 01:54:49.417594 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:54:49.418501 kubelet[2967]: E0128 01:54:49.418083 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:54:49.426870 kubelet[2967]: E0128 01:54:49.426795 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rq4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:54:49.429884 kubelet[2967]: E0128 01:54:49.429342 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:54:49.627360 sshd[6734]: Connection closed by 10.0.0.1 port 39204 Jan 28 01:54:49.622951 sshd-session[6730]: pam_unix(sshd:session): session closed for user core Jan 28 01:54:49.642000 audit[6730]: USER_END pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.667150 systemd[1]: sshd@28-10.0.0.85:22-10.0.0.1:39204.service: Deactivated successfully. Jan 28 01:54:49.686490 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:54:49.723648 kernel: audit: type=1106 audit(1769565289.642:974): pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.724851 kernel: audit: type=1104 audit(1769565289.642:975): pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.642000 audit[6730]: CRED_DISP pid=6730 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:49.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.85:22-10.0.0.1:39204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:49.705798 systemd-logind[1586]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:54:49.719396 systemd-logind[1586]: Removed session 30. 
Jan 28 01:54:51.187130 kubelet[2967]: E0128 01:54:51.186358 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:54:54.218603 kubelet[2967]: E0128 01:54:54.216520 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:54:54.218603 kubelet[2967]: E0128 01:54:54.216597 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:54:54.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.85:22-10.0.0.1:53314 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:54.681099 systemd[1]: Started sshd@29-10.0.0.85:22-10.0.0.1:53314.service - OpenSSH per-connection server daemon (10.0.0.1:53314). Jan 28 01:54:54.703073 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:54:54.703363 kernel: audit: type=1130 audit(1769565294.680:977): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.85:22-10.0.0.1:53314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:54:54.942000 audit[6757]: USER_ACCT pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:54.945316 sshd[6757]: Accepted publickey for core from 10.0.0.1 port 53314 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:54:54.953204 sshd-session[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:54:54.989532 systemd-logind[1586]: New session 31 of user core. 
Jan 28 01:54:54.948000 audit[6757]: CRED_ACQ pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.049940 kernel: audit: type=1101 audit(1769565294.942:978): pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.050115 kernel: audit: type=1103 audit(1769565294.948:979): pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.056337 kernel: audit: type=1006 audit(1769565294.948:980): pid=6757 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 28 01:54:55.078610 kernel: audit: type=1300 audit(1769565294.948:980): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff695fa820 a2=3 a3=0 items=0 ppid=1 pid=6757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:54.948000 audit[6757]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff695fa820 a2=3 a3=0 items=0 ppid=1 pid=6757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:54:55.122382 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 28 01:54:54.948000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:55.138000 audit[6757]: USER_START pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.195527 kernel: audit: type=1327 audit(1769565294.948:980): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:54:55.195663 kernel: audit: type=1105 audit(1769565295.138:981): pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.145000 audit[6761]: CRED_ACQ pid=6761 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.223292 kernel: audit: type=1103 audit(1769565295.145:982): pid=6761 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.829938 sshd[6761]: Connection closed by 10.0.0.1 port 53314 Jan 28 01:54:55.831970 sshd-session[6757]: pam_unix(sshd:session): session closed for user core Jan 28 01:54:55.860000 audit[6757]: USER_END pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.887311 systemd[1]: sshd@29-10.0.0.85:22-10.0.0.1:53314.service: Deactivated successfully. Jan 28 01:54:55.908624 kernel: audit: type=1106 audit(1769565295.860:983): pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.909095 kernel: audit: type=1104 audit(1769565295.860:984): pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.860000 audit[6757]: CRED_DISP pid=6757 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:54:55.897610 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:54:55.909423 systemd-logind[1586]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:54:55.911642 systemd-logind[1586]: Removed session 31. Jan 28 01:54:55.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.85:22-10.0.0.1:53314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:54:56.211445 kubelet[2967]: E0128 01:54:56.209639 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:54:57.211005 kubelet[2967]: E0128 01:54:57.203632 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:54:57.219413 kubelet[2967]: E0128 01:54:57.218796 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:00.480642 kubelet[2967]: E0128 01:55:00.473354 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 
01:55:00.942903 kubelet[2967]: E0128 01:55:00.816441 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:55:01.533938 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:55:01.534395 kernel: audit: type=1130 audit(1769565301.490:986): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.85:22-10.0.0.1:53316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:01.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.85:22-10.0.0.1:53316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:01.492089 systemd[1]: Started sshd@30-10.0.0.85:22-10.0.0.1:53316.service - OpenSSH per-connection server daemon (10.0.0.1:53316). 
Jan 28 01:55:02.076000 audit[6781]: USER_ACCT pid=6781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:02.140926 sshd[6781]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:02.161201 sshd-session[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:02.179835 kernel: audit: type=1101 audit(1769565302.076:987): pid=6781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:02.156000 audit[6781]: CRED_ACQ pid=6781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:02.291277 kernel: audit: type=1103 audit(1769565302.156:988): pid=6781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:02.378032 kernel: audit: type=1006 audit(1769565302.158:989): pid=6781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 28 01:55:02.400286 kubelet[2967]: E0128 01:55:02.267847 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:55:02.158000 audit[6781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ec8d350 a2=3 a3=0 items=0 ppid=1 pid=6781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:02.612098 kernel: audit: type=1300 audit(1769565302.158:989): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ec8d350 a2=3 a3=0 items=0 ppid=1 pid=6781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:02.689053 kernel: audit: type=1327 audit(1769565302.158:989): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:02.158000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:02.569377 systemd-logind[1586]: New session 32 of user core. Jan 28 01:55:02.833926 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 28 01:55:03.122000 audit[6781]: USER_START pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:03.299521 kernel: audit: type=1105 audit(1769565303.122:990): pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:03.299900 kernel: audit: type=1103 audit(1769565303.229:991): pid=6804 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:03.229000 audit[6804]: CRED_ACQ pid=6804 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:05.524066 kubelet[2967]: E0128 01:55:05.514034 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:55:05.580787 kubelet[2967]: E0128 
01:55:05.580643 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:55:07.398882 kubelet[2967]: E0128 01:55:07.398647 2967 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.171s" Jan 28 01:55:07.627818 kubelet[2967]: E0128 01:55:07.627503 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" 
podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:12.736600 containerd[1609]: time="2026-01-28T01:55:12.715287309Z" level=info msg="container event discarded" container=0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9 type=CONTAINER_STOPPED_EVENT Jan 28 01:55:12.934820 containerd[1609]: time="2026-01-28T01:55:12.929346125Z" level=info msg="container event discarded" container=97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33 type=CONTAINER_CREATED_EVENT Jan 28 01:55:12.934820 containerd[1609]: time="2026-01-28T01:55:12.931542491Z" level=info msg="container event discarded" container=97307db3a56847c2b3ea5411d14db48f22041ad8fd6281809277fb982b642a33 type=CONTAINER_STARTED_EVENT Jan 28 01:55:13.442368 containerd[1609]: time="2026-01-28T01:55:13.441915858Z" level=info msg="container event discarded" container=1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26 type=CONTAINER_CREATED_EVENT Jan 28 01:55:13.627236 kubelet[2967]: E0128 01:55:13.619393 2967 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.979s" Jan 28 01:55:14.672980 containerd[1609]: time="2026-01-28T01:55:14.672447444Z" level=info msg="container event discarded" container=1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26 type=CONTAINER_STARTED_EVENT Jan 28 01:55:15.054082 kubelet[2967]: E0128 01:55:15.053441 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:55:15.076357 kubelet[2967]: E0128 01:55:15.070143 2967 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:55:15.721609 sshd[6804]: Connection closed by 10.0.0.1 port 53316 Jan 28 01:55:15.770391 sshd-session[6781]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:15.901000 audit[6781]: USER_END pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:16.145023 kernel: audit: type=1106 audit(1769565315.901:992): pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:16.177489 kernel: audit: type=1104 audit(1769565315.901:993): pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:15.901000 audit[6781]: CRED_DISP pid=6781 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:55:16.274872 systemd[1]: sshd@30-10.0.0.85:22-10.0.0.1:53316.service: Deactivated successfully. Jan 28 01:55:16.921359 kernel: audit: type=1131 audit(1769565316.375:994): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.85:22-10.0.0.1:53316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:16.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.85:22-10.0.0.1:53316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:16.996505 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:55:17.005076 systemd[1]: session-32.scope: Consumed 1.045s CPU time, 18.6M memory peak. Jan 28 01:55:17.035924 systemd-logind[1586]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:55:17.134780 systemd-logind[1586]: Removed session 32. Jan 28 01:55:17.211454 kubelet[2967]: E0128 01:55:17.187548 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:55:17.313243 systemd[1]: cri-containerd-1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26.scope: Deactivated successfully. 
Jan 28 01:55:17.339000 audit: BPF prog-id=156 op=UNLOAD Jan 28 01:55:17.344115 containerd[1609]: time="2026-01-28T01:55:17.332391483Z" level=info msg="received container exit event container_id:\"1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26\" id:\"1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26\" pid:3492 exit_status:1 exited_at:{seconds:1769565317 nanos:330064784}" Jan 28 01:55:17.333634 systemd[1]: cri-containerd-1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26.scope: Consumed 18.731s CPU time, 79M memory peak, 860K read from disk. Jan 28 01:55:17.357024 kernel: audit: type=1334 audit(1769565317.339:995): prog-id=156 op=UNLOAD Jan 28 01:55:17.339000 audit: BPF prog-id=160 op=UNLOAD Jan 28 01:55:17.375632 kernel: audit: type=1334 audit(1769565317.339:996): prog-id=160 op=UNLOAD Jan 28 01:55:17.458274 containerd[1609]: time="2026-01-28T01:55:17.456868005Z" level=error msg="ExecSync for \"cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jan 28 01:55:17.462804 kubelet[2967]: E0128 01:55:17.461786 2967 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 28 01:55:17.681116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26-rootfs.mount: Deactivated successfully. 
Jan 28 01:55:18.123919 kubelet[2967]: I0128 01:55:18.123642 2967 scope.go:117] "RemoveContainer" containerID="0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9" Jan 28 01:55:18.129546 kubelet[2967]: I0128 01:55:18.128335 2967 scope.go:117] "RemoveContainer" containerID="1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26" Jan 28 01:55:18.129546 kubelet[2967]: E0128 01:55:18.128557 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-25ww8_tigera-operator(b931dda2-4f4f-40f1-a4a9-4f772efe9eb9)\"" pod="tigera-operator/tigera-operator-7dcd859c48-25ww8" podUID="b931dda2-4f4f-40f1-a4a9-4f772efe9eb9" Jan 28 01:55:18.152523 containerd[1609]: time="2026-01-28T01:55:18.150133983Z" level=info msg="RemoveContainer for \"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\"" Jan 28 01:55:18.288450 kubelet[2967]: E0128 01:55:18.288288 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:18.505754 containerd[1609]: 
time="2026-01-28T01:55:18.504300427Z" level=info msg="RemoveContainer for \"0bb8b1bd5c821a57e0d0dcb49f9dcb87d6b4e86ef33da9e75d79784c9591c0a9\" returns successfully" Jan 28 01:55:20.930165 systemd[1]: Started sshd@31-10.0.0.85:22-10.0.0.1:34988.service - OpenSSH per-connection server daemon (10.0.0.1:34988). Jan 28 01:55:21.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.85:22-10.0.0.1:34988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:21.078343 kernel: audit: type=1130 audit(1769565321.017:997): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.85:22-10.0.0.1:34988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:21.227842 kubelet[2967]: E0128 01:55:21.203015 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:55:21.268130 kubelet[2967]: E0128 01:55:21.267304 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:55:22.008000 audit[6834]: USER_ACCT pid=6834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.016411 sshd[6834]: Accepted publickey for core from 10.0.0.1 port 34988 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:22.019042 sshd-session[6834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:22.117916 kernel: audit: type=1101 audit(1769565322.008:998): pid=6834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.118085 kernel: audit: type=1103 audit(1769565322.012:999): pid=6834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.012000 audit[6834]: CRED_ACQ pid=6834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.270832 kernel: audit: type=1006 
audit(1769565322.012:1000): pid=6834 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 28 01:55:22.012000 audit[6834]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe347fcb30 a2=3 a3=0 items=0 ppid=1 pid=6834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:22.295277 systemd-logind[1586]: New session 33 of user core. Jan 28 01:55:22.321530 kernel: audit: type=1300 audit(1769565322.012:1000): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe347fcb30 a2=3 a3=0 items=0 ppid=1 pid=6834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:22.321844 kernel: audit: type=1327 audit(1769565322.012:1000): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:22.012000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:22.335248 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 28 01:55:22.390000 audit[6834]: USER_START pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.409000 audit[6845]: CRED_ACQ pid=6845 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.622020 kernel: audit: type=1105 audit(1769565322.390:1001): pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:22.622239 kernel: audit: type=1103 audit(1769565322.409:1002): pid=6845 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:23.748536 sshd[6845]: Connection closed by 10.0.0.1 port 34988 Jan 28 01:55:23.750543 sshd-session[6834]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:23.741000 audit[6834]: USER_END pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:23.823062 systemd[1]: sshd@31-10.0.0.85:22-10.0.0.1:34988.service: Deactivated successfully. 
Jan 28 01:55:23.895554 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:55:23.933050 systemd-logind[1586]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:55:23.966996 kernel: audit: type=1106 audit(1769565323.741:1003): pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:23.741000 audit[6834]: CRED_DISP pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:24.021546 systemd-logind[1586]: Removed session 33. Jan 28 01:55:24.065766 kernel: audit: type=1104 audit(1769565323.741:1004): pid=6834 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:23.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.85:22-10.0.0.1:34988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:55:24.449648 containerd[1609]: time="2026-01-28T01:55:24.446939175Z" level=info msg="container event discarded" container=009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2 type=CONTAINER_CREATED_EVENT Jan 28 01:55:25.137355 containerd[1609]: time="2026-01-28T01:55:25.137253875Z" level=info msg="container event discarded" container=009fdc883610089e19b9e1012855e2339327ea36befa6ded8508aac445515df2 type=CONTAINER_STARTED_EVENT Jan 28 01:55:26.227107 kubelet[2967]: E0128 01:55:26.227047 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:55:28.828333 systemd[1]: Started sshd@32-10.0.0.85:22-10.0.0.1:49570.service - OpenSSH per-connection server daemon (10.0.0.1:49570). Jan 28 01:55:28.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.85:22-10.0.0.1:49570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:28.876000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:55:28.876273 kernel: audit: type=1130 audit(1769565328.828:1006): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.85:22-10.0.0.1:49570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:55:29.196622 kubelet[2967]: E0128 01:55:29.193378 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:55:29.196622 kubelet[2967]: E0128 01:55:29.194891 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:29.220000 audit[6859]: USER_ACCT pid=6859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.232512 sshd[6859]: Accepted publickey for core from 10.0.0.1 port 49570 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:29.258829 
sshd-session[6859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:29.292756 kernel: audit: type=1101 audit(1769565329.220:1007): pid=6859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.292887 kernel: audit: type=1103 audit(1769565329.233:1008): pid=6859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.233000 audit[6859]: CRED_ACQ pid=6859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.326387 systemd-logind[1586]: New session 34 of user core. Jan 28 01:55:29.364301 kernel: audit: type=1006 audit(1769565329.233:1009): pid=6859 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 28 01:55:29.233000 audit[6859]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03a74dd0 a2=3 a3=0 items=0 ppid=1 pid=6859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:29.391857 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 28 01:55:29.233000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:29.490519 kernel: audit: type=1300 audit(1769565329.233:1009): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03a74dd0 a2=3 a3=0 items=0 ppid=1 pid=6859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:29.497619 kernel: audit: type=1327 audit(1769565329.233:1009): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:29.497815 kernel: audit: type=1105 audit(1769565329.437:1010): pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.437000 audit[6859]: USER_START pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.446000 audit[6863]: CRED_ACQ pid=6863 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:29.618980 kernel: audit: type=1103 audit(1769565329.446:1011): pid=6863 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:30.187585 sshd[6863]: Connection closed by 10.0.0.1 port 
49570 Jan 28 01:55:30.187817 sshd-session[6859]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:30.185000 audit[6859]: USER_END pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:30.210136 systemd-logind[1586]: Session 34 logged out. Waiting for processes to exit. Jan 28 01:55:30.211646 systemd[1]: sshd@32-10.0.0.85:22-10.0.0.1:49570.service: Deactivated successfully. Jan 28 01:55:30.226070 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:55:30.262466 systemd-logind[1586]: Removed session 34. Jan 28 01:55:30.287240 kernel: audit: type=1106 audit(1769565330.185:1012): pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:30.185000 audit[6859]: CRED_DISP pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:30.357656 kernel: audit: type=1104 audit(1769565330.185:1013): pid=6859 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:30.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.85:22-10.0.0.1:49570 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 28 01:55:32.196046 kubelet[2967]: E0128 01:55:32.194434 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:55:32.196046 kubelet[2967]: E0128 01:55:32.194797 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:55:33.195333 kubelet[2967]: I0128 01:55:33.186528 2967 scope.go:117] "RemoveContainer" containerID="1982c49e22b40b664af3807286ae7acff0aed44e51ce169f153d57ff2c91bb26" Jan 28 01:55:33.195493 containerd[1609]: time="2026-01-28T01:55:33.194985615Z" level=info msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jan 28 01:55:33.284018 containerd[1609]: time="2026-01-28T01:55:33.282418973Z" level=info msg="Container cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:55:33.319741 containerd[1609]: time="2026-01-28T01:55:33.316139355Z" level=info 
msg="CreateContainer within sandbox \"b9d1d348cf0795ea248711c7ef2848f460514adfe68ff32870ab0b42bd3087c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48\"" Jan 28 01:55:33.324550 containerd[1609]: time="2026-01-28T01:55:33.323041727Z" level=info msg="StartContainer for \"cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48\"" Jan 28 01:55:33.337596 containerd[1609]: time="2026-01-28T01:55:33.333358425Z" level=info msg="connecting to shim cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48" address="unix:///run/containerd/s/d09acdc242401520f6653fb2b4f019199fc6c2a2fe093660366a037d4b219284" protocol=ttrpc version=3 Jan 28 01:55:33.506963 systemd[1]: Started cri-containerd-cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48.scope - libcontainer container cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48. Jan 28 01:55:33.634000 audit: BPF prog-id=267 op=LOAD Jan 28 01:55:33.636000 audit: BPF prog-id=268 op=LOAD Jan 28 01:55:33.636000 audit[6898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.640000 audit: BPF prog-id=268 op=UNLOAD Jan 28 01:55:33.640000 audit[6898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.658000 audit: BPF prog-id=269 op=LOAD Jan 28 01:55:33.658000 audit[6898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.658000 audit: BPF prog-id=270 op=LOAD Jan 28 01:55:33.658000 audit[6898]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.658000 audit: BPF prog-id=270 op=UNLOAD Jan 28 01:55:33.658000 audit[6898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.658000 audit: BPF prog-id=269 op=UNLOAD Jan 28 01:55:33.658000 audit[6898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:33.658000 audit: BPF prog-id=271 op=LOAD Jan 28 01:55:33.658000 audit[6898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3164 pid=6898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:33.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366363235393065616330383730633434303933323765633664306635 Jan 28 01:55:34.004744 containerd[1609]: time="2026-01-28T01:55:34.004592277Z" level=info msg="StartContainer for \"cf62590eac0870c4409327ec6d0f5706750d4c86c5a6194198a59df102081c48\" returns successfully" Jan 28 
01:55:35.192981 kubelet[2967]: E0128 01:55:35.192924 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:55:35.229906 systemd[1]: Started sshd@33-10.0.0.85:22-10.0.0.1:46680.service - OpenSSH per-connection server daemon (10.0.0.1:46680). Jan 28 01:55:35.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.85:22-10.0.0.1:46680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:35.240035 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 01:55:35.240194 kernel: audit: type=1130 audit(1769565335.224:1023): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.85:22-10.0.0.1:46680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:55:35.694065 sshd[6935]: Accepted publickey for core from 10.0.0.1 port 46680 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:35.692000 audit[6935]: USER_ACCT pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:35.700347 sshd-session[6935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:35.697000 audit[6935]: CRED_ACQ pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:35.734502 systemd-logind[1586]: New session 35 of user core. Jan 28 01:55:35.777272 kernel: audit: type=1101 audit(1769565335.692:1024): pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:35.777441 kernel: audit: type=1103 audit(1769565335.697:1025): pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:35.777498 kernel: audit: type=1006 audit(1769565335.697:1026): pid=6935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 28 01:55:35.697000 audit[6935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c986120 a2=3 a3=0 items=0 ppid=1 pid=6935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:35.834939 kernel: audit: type=1300 audit(1769565335.697:1026): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c986120 a2=3 a3=0 items=0 ppid=1 pid=6935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:35.697000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:35.860038 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:55:35.866313 kernel: audit: type=1327 audit(1769565335.697:1026): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:35.905000 audit[6935]: USER_START pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:35.929000 audit[6940]: CRED_ACQ pid=6940 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.070806 kernel: audit: type=1105 audit(1769565335.905:1027): pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.070965 kernel: audit: type=1103 audit(1769565335.929:1028): pid=6940 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.219979 kubelet[2967]: E0128 01:55:36.219828 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:55:36.832643 sshd[6940]: Connection closed by 10.0.0.1 port 46680 Jan 28 01:55:36.852504 sshd-session[6935]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:36.873000 audit[6935]: USER_END pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.908590 kernel: audit: type=1106 audit(1769565336.873:1029): pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.891000 audit[6935]: CRED_DISP pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.926765 kernel: audit: type=1104 audit(1769565336.891:1030): pid=6935 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:36.942545 systemd[1]: sshd@33-10.0.0.85:22-10.0.0.1:46680.service: Deactivated successfully. 
Jan 28 01:55:36.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.85:22-10.0.0.1:46680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:36.960259 systemd-logind[1586]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:55:36.965369 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:55:36.990389 systemd-logind[1586]: Removed session 35. Jan 28 01:55:40.212592 kubelet[2967]: E0128 01:55:40.206418 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:55:41.188975 kubelet[2967]: E0128 01:55:41.188837 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:55:41.912118 systemd[1]: Started sshd@34-10.0.0.85:22-10.0.0.1:46682.service - OpenSSH per-connection server daemon (10.0.0.1:46682). 
Jan 28 01:55:41.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.85:22-10.0.0.1:46682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:41.918901 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:55:41.919017 kernel: audit: type=1130 audit(1769565341.910:1032): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.85:22-10.0.0.1:46682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:42.316000 audit[6955]: USER_ACCT pid=6955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.331635 sshd[6955]: Accepted publickey for core from 10.0.0.1 port 46682 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:42.343348 sshd-session[6955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:42.389767 kernel: audit: type=1101 audit(1769565342.316:1033): pid=6955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.389975 kernel: audit: type=1103 audit(1769565342.325:1034): pid=6955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.325000 audit[6955]: CRED_ACQ pid=6955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.399332 systemd-logind[1586]: New session 36 of user core. Jan 28 01:55:42.465475 kernel: audit: type=1006 audit(1769565342.325:1035): pid=6955 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 28 01:55:42.325000 audit[6955]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb7a8670 a2=3 a3=0 items=0 ppid=1 pid=6955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:42.505791 kernel: audit: type=1300 audit(1769565342.325:1035): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb7a8670 a2=3 a3=0 items=0 ppid=1 pid=6955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:42.506329 kernel: audit: type=1327 audit(1769565342.325:1035): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:42.325000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:42.520743 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 28 01:55:42.590000 audit[6955]: USER_START pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.671091 kernel: audit: type=1105 audit(1769565342.590:1036): pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.671397 kernel: audit: type=1103 audit(1769565342.632:1037): pid=6961 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:42.632000 audit[6961]: CRED_ACQ pid=6961 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:43.195940 kubelet[2967]: E0128 01:55:43.195792 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:55:43.669739 sshd[6961]: Connection closed by 10.0.0.1 port 46682 Jan 28 01:55:43.672759 sshd-session[6955]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:43.684000 audit[6955]: USER_END pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:43.728073 systemd[1]: sshd@34-10.0.0.85:22-10.0.0.1:46682.service: Deactivated successfully. Jan 28 01:55:43.754933 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:55:43.688000 audit[6955]: CRED_DISP pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:43.773120 systemd-logind[1586]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:55:43.788920 systemd-logind[1586]: Removed session 36. Jan 28 01:55:43.854171 kernel: audit: type=1106 audit(1769565343.684:1038): pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:43.854333 kernel: audit: type=1104 audit(1769565343.688:1039): pid=6955 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:43.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.85:22-10.0.0.1:46682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:55:44.203488 kubelet[2967]: E0128 01:55:44.202759 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:55:44.217882 kubelet[2967]: E0128 01:55:44.217074 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:45.190113 kubelet[2967]: E0128 01:55:45.189993 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:55:48.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.85:22-10.0.0.1:49840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:48.735088 systemd[1]: Started sshd@35-10.0.0.85:22-10.0.0.1:49840.service - OpenSSH per-connection server daemon (10.0.0.1:49840). Jan 28 01:55:48.762114 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:55:48.762342 kernel: audit: type=1130 audit(1769565348.733:1041): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.85:22-10.0.0.1:49840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:49.122000 audit[6975]: USER_ACCT pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.124512 sshd[6975]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:49.131918 sshd-session[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:49.161753 kernel: audit: type=1101 audit(1769565349.122:1042): pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.125000 audit[6975]: CRED_ACQ pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.198819 systemd-logind[1586]: New session 37 of user core. Jan 28 01:55:49.202773 kubelet[2967]: E0128 01:55:49.202511 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:55:49.224868 kernel: audit: type=1103 audit(1769565349.125:1043): pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.225017 kernel: audit: type=1006 audit(1769565349.125:1044): pid=6975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 28 01:55:49.225062 kernel: audit: type=1300 audit(1769565349.125:1044): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1d811da0 a2=3 a3=0 items=0 ppid=1 pid=6975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:49.125000 audit[6975]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1d811da0 a2=3 a3=0 items=0 ppid=1 pid=6975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:49.125000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:49.290249 kernel: audit: type=1327 audit(1769565349.125:1044): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:49.293614 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:55:49.305000 audit[6975]: USER_START pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.367454 kernel: audit: type=1105 audit(1769565349.305:1045): pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.367642 kernel: audit: type=1103 audit(1769565349.329:1046): pid=6979 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.329000 audit[6979]: CRED_ACQ pid=6979 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.934984 sshd[6979]: Connection closed by 10.0.0.1 port 49840 Jan 28 01:55:49.934453 sshd-session[6975]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:49.942000 audit[6975]: USER_END pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.975461 systemd[1]: sshd@35-10.0.0.85:22-10.0.0.1:49840.service: Deactivated successfully. Jan 28 01:55:49.996883 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:55:50.032742 systemd-logind[1586]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:55:50.051607 systemd-logind[1586]: Removed session 37. Jan 28 01:55:49.943000 audit[6975]: CRED_DISP pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:50.141339 kernel: audit: type=1106 audit(1769565349.942:1047): pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:50.147365 kernel: audit: type=1104 audit(1769565349.943:1048): pid=6975 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:49.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.85:22-10.0.0.1:49840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:51.190762 kubelet[2967]: E0128 01:55:51.190077 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:55:51.192388 kubelet[2967]: E0128 01:55:51.192304 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:55:51.199335 kubelet[2967]: E0128 01:55:51.195182 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:55:54.212823 kubelet[2967]: E0128 01:55:54.205754 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:55:55.144444 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:55:55.144609 kernel: audit: type=1130 audit(1769565355.120:1050): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.85:22-10.0.0.1:43752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:55.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.85:22-10.0.0.1:43752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:55.124796 systemd[1]: Started sshd@36-10.0.0.85:22-10.0.0.1:43752.service - OpenSSH per-connection server daemon (10.0.0.1:43752). Jan 28 01:55:55.870000 audit[6995]: USER_ACCT pid=6995 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:55.906056 sshd[6995]: Accepted publickey for core from 10.0.0.1 port 43752 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:55:55.916352 sshd-session[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:55:55.908000 audit[6995]: CRED_ACQ pid=6995 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.014778 kernel: audit: type=1101 audit(1769565355.870:1051): pid=6995 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.014915 kernel: audit: type=1103 audit(1769565355.908:1052): pid=6995 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.029860 kernel: audit: type=1006 audit(1769565355.908:1053): pid=6995 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 28 01:55:56.045925 kernel: audit: type=1300 audit(1769565355.908:1053): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea86e2c80 a2=3 a3=0 items=0 ppid=1 pid=6995 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:55.908000 audit[6995]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea86e2c80 a2=3 a3=0 items=0 ppid=1 pid=6995 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:55:56.040520 systemd-logind[1586]: New session 38 of user core. Jan 28 01:55:55.908000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:56.159350 kernel: audit: type=1327 audit(1769565355.908:1053): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:55:56.177538 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 28 01:55:56.208578 kubelet[2967]: E0128 01:55:56.204460 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:55:56.206000 audit[6995]: USER_START pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.331575 kernel: audit: type=1105 audit(1769565356.206:1054): pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.331806 kernel: audit: type=1103 audit(1769565356.215:1055): pid=6999 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.215000 audit[6999]: CRED_ACQ pid=6999 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:56.960802 sshd[6999]: Connection closed by 10.0.0.1 port 43752 Jan 
28 01:55:56.963940 sshd-session[6995]: pam_unix(sshd:session): session closed for user core Jan 28 01:55:56.983000 audit[6995]: USER_END pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:57.003489 systemd[1]: sshd@36-10.0.0.85:22-10.0.0.1:43752.service: Deactivated successfully. Jan 28 01:55:57.018110 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:55:57.039035 systemd-logind[1586]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:55:56.984000 audit[6995]: CRED_DISP pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:57.049428 systemd-logind[1586]: Removed session 38. 
Jan 28 01:55:57.080249 kernel: audit: type=1106 audit(1769565356.983:1056): pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:57.080438 kernel: audit: type=1104 audit(1769565356.984:1057): pid=6995 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:55:57.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.85:22-10.0.0.1:43752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:55:58.240359 kubelet[2967]: E0128 01:55:58.239876 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:55:59.194493 kubelet[2967]: E0128 01:55:59.193980 2967 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:56:01.209632 kubelet[2967]: E0128 01:56:01.202824 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:56:02.202044 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:02.202292 kernel: audit: type=1130 audit(1769565362.124:1059): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.85:22-10.0.0.1:43768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:02.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.85:22-10.0.0.1:43768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:02.132001 systemd[1]: Started sshd@37-10.0.0.85:22-10.0.0.1:43768.service - OpenSSH per-connection server daemon (10.0.0.1:43768). 
Jan 28 01:56:02.223564 kubelet[2967]: E0128 01:56:02.223508 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:56:02.874000 audit[7046]: USER_ACCT pid=7046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:02.922054 kernel: audit: type=1101 audit(1769565362.874:1060): pid=7046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:02.894874 sshd-session[7046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:02.922662 sshd[7046]: Accepted publickey for core from 10.0.0.1 port 43768 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:02.892000 audit[7046]: CRED_ACQ pid=7046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:02.932553 systemd-logind[1586]: New session 39 of user core. 
Jan 28 01:56:02.949749 kernel: audit: type=1103 audit(1769565362.892:1061): pid=7046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:02.949910 kernel: audit: type=1006 audit(1769565362.892:1062): pid=7046 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 28 01:56:02.892000 audit[7046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf5a92e0 a2=3 a3=0 items=0 ppid=1 pid=7046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:02.892000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:03.033882 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 28 01:56:03.057586 kernel: audit: type=1300 audit(1769565362.892:1062): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf5a92e0 a2=3 a3=0 items=0 ppid=1 pid=7046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:03.057771 kernel: audit: type=1327 audit(1769565362.892:1062): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:03.095000 audit[7046]: USER_START pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:03.144815 kernel: audit: type=1105 audit(1769565363.095:1063): pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:03.106000 audit[7050]: CRED_ACQ pid=7050 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:03.204035 kernel: audit: type=1103 audit(1769565363.106:1064): pid=7050 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:03.232622 kubelet[2967]: E0128 01:56:03.232358 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:56:03.993451 sshd[7050]: Connection closed by 10.0.0.1 port 43768 Jan 28 01:56:03.996963 sshd-session[7046]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:03.997000 audit[7046]: USER_END pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:04.025453 systemd[1]: sshd@37-10.0.0.85:22-10.0.0.1:43768.service: Deactivated successfully. Jan 28 01:56:04.040774 systemd[1]: session-39.scope: Deactivated successfully. 
Jan 28 01:56:04.089798 kernel: audit: type=1106 audit(1769565363.997:1065): pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:04.089941 kernel: audit: type=1104 audit(1769565363.997:1066): pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:03.997000 audit[7046]: CRED_DISP pid=7046 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:04.084038 systemd-logind[1586]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:56:04.093229 systemd-logind[1586]: Removed session 39. Jan 28 01:56:04.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.85:22-10.0.0.1:43768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:06.273009 containerd[1609]: time="2026-01-28T01:56:06.267597622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:56:06.294470 containerd[1609]: time="2026-01-28T01:56:06.294378473Z" level=info msg="container event discarded" container=2f10dc0975b1cd21acae00f371fed84998a86edf5382e1bd3d0830c0022baa2c type=CONTAINER_STOPPED_EVENT Jan 28 01:56:06.413873 containerd[1609]: time="2026-01-28T01:56:06.411861926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:06.433494 containerd[1609]: time="2026-01-28T01:56:06.433413514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:56:06.434098 containerd[1609]: time="2026-01-28T01:56:06.433938227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:06.435553 kubelet[2967]: E0128 01:56:06.434849 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:56:06.435553 kubelet[2967]: E0128 01:56:06.434918 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:56:06.435553 kubelet[2967]: E0128 01:56:06.435178 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv2sz_calico-system(be8a6b52-634d-45dc-a492-0c042b64c6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:06.456741 kubelet[2967]: E0128 01:56:06.455511 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:56:07.812263 containerd[1609]: time="2026-01-28T01:56:07.809756560Z" level=info msg="container event discarded" container=618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516 type=CONTAINER_CREATED_EVENT Jan 28 01:56:08.306498 kubelet[2967]: E0128 01:56:08.303780 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:56:08.829476 containerd[1609]: time="2026-01-28T01:56:08.829405063Z" level=info msg="container event discarded" container=618185ebd88995219edd740c485ef90d5784397e4a0beffe20265a36503b8516 type=CONTAINER_STARTED_EVENT Jan 28 01:56:09.028930 systemd[1]: Started sshd@38-10.0.0.85:22-10.0.0.1:49676.service - OpenSSH per-connection server daemon (10.0.0.1:49676). Jan 28 01:56:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.85:22-10.0.0.1:49676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:09.041394 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:09.041556 kernel: audit: type=1130 audit(1769565369.028:1068): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.85:22-10.0.0.1:49676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:09.565000 audit[7063]: USER_ACCT pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:09.639949 sshd[7063]: Accepted publickey for core from 10.0.0.1 port 49676 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:09.657600 kernel: audit: type=1101 audit(1769565369.565:1069): pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:09.659000 audit[7063]: CRED_ACQ pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:09.690181 sshd-session[7063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:09.702345 kernel: audit: type=1103 audit(1769565369.659:1070): pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:09.703316 kernel: audit: type=1006 audit(1769565369.659:1071): pid=7063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 28 01:56:09.659000 audit[7063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff778a7480 a2=3 a3=0 items=0 ppid=1 pid=7063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:09.762449 kernel: audit: type=1300 audit(1769565369.659:1071): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff778a7480 a2=3 a3=0 items=0 ppid=1 pid=7063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:09.762640 kernel: audit: type=1327 audit(1769565369.659:1071): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:09.659000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:09.767597 systemd-logind[1586]: New session 40 of user core. Jan 28 01:56:09.807036 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:56:09.861000 audit[7063]: USER_START pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:09.901000 audit[7067]: CRED_ACQ pid=7067 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.073470 kernel: audit: type=1105 audit(1769565369.861:1072): pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.073660 kernel: audit: type=1103 audit(1769565369.901:1073): pid=7067 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.198214 kubelet[2967]: E0128 01:56:10.192450 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:56:10.860461 sshd[7067]: Connection closed by 10.0.0.1 port 49676 Jan 28 01:56:10.861494 sshd-session[7063]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:10.868000 audit[7063]: USER_END pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.985856 kernel: audit: type=1106 audit(1769565370.868:1074): pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.868000 audit[7063]: CRED_DISP pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:11.077948 kernel: audit: type=1104 audit(1769565370.868:1075): pid=7063 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:10.990299 systemd[1]: sshd@38-10.0.0.85:22-10.0.0.1:49676.service: Deactivated successfully. 
Jan 28 01:56:10.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.85:22-10.0.0.1:49676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:11.043411 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:56:11.072246 systemd-logind[1586]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:56:11.083942 systemd-logind[1586]: Removed session 40. Jan 28 01:56:12.234058 containerd[1609]: time="2026-01-28T01:56:12.233965518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:56:12.421928 containerd[1609]: time="2026-01-28T01:56:12.421323359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:12.434522 containerd[1609]: time="2026-01-28T01:56:12.429841296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:56:12.434522 containerd[1609]: time="2026-01-28T01:56:12.429993470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:12.434522 containerd[1609]: time="2026-01-28T01:56:12.431577307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:56:12.434878 kubelet[2967]: E0128 01:56:12.430364 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:56:12.434878 kubelet[2967]: E0128 01:56:12.430423 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:56:12.434878 kubelet[2967]: E0128 01:56:12.430743 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:11f2d6a54a3d467fbd60c4526f82d473,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:12.622843 containerd[1609]: time="2026-01-28T01:56:12.618785112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:12.633854 containerd[1609]: time="2026-01-28T01:56:12.633599739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:56:12.633854 containerd[1609]: time="2026-01-28T01:56:12.633804230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:12.637973 kubelet[2967]: E0128 01:56:12.637910 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:56:12.638241 kubelet[2967]: E0128 01:56:12.638205 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:56:12.638799 kubelet[2967]: E0128 01:56:12.638733 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tgn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849fc56f8-v9sqx_calico-system(67371941-5272-4e0e-84ef-cf7de9065a57): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:12.644320 kubelet[2967]: E0128 01:56:12.644032 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:56:12.663191 containerd[1609]: time="2026-01-28T01:56:12.660923002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:56:12.889020 containerd[1609]: time="2026-01-28T01:56:12.888107043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
28 01:56:12.898441 containerd[1609]: time="2026-01-28T01:56:12.898263607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:56:12.898441 containerd[1609]: time="2026-01-28T01:56:12.898406723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:12.904259 kubelet[2967]: E0128 01:56:12.900278 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:56:12.904259 kubelet[2967]: E0128 01:56:12.900380 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:56:12.904259 kubelet[2967]: E0128 01:56:12.900543 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb5cb5d8-9zmvs_calico-system(f9057416-92cd-485c-b269-9b046834d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:12.904259 kubelet[2967]: E0128 01:56:12.902258 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:56:15.950347 systemd[1]: Started sshd@39-10.0.0.85:22-10.0.0.1:44078.service - OpenSSH per-connection server daemon (10.0.0.1:44078). Jan 28 01:56:15.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.85:22-10.0.0.1:44078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:16.011380 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:16.011595 kernel: audit: type=1130 audit(1769565375.949:1077): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.85:22-10.0.0.1:44078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:16.282910 containerd[1609]: time="2026-01-28T01:56:16.279765404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:56:16.384442 containerd[1609]: time="2026-01-28T01:56:16.383802718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:16.410007 containerd[1609]: time="2026-01-28T01:56:16.409080674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:56:16.410007 containerd[1609]: time="2026-01-28T01:56:16.409514693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:16.411657 kubelet[2967]: E0128 01:56:16.410857 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:56:16.411657 kubelet[2967]: E0128 01:56:16.410914 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:56:16.411657 kubelet[2967]: E0128 01:56:16.411082 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjkwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mbn64_calico-apiserver(ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:16.411000 audit[7084]: USER_ACCT pid=7084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.427605 sshd-session[7084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:16.431103 kubelet[2967]: E0128 01:56:16.416067 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:56:16.431212 sshd[7084]: Accepted publickey for core from 10.0.0.1 port 44078 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:16.481172 systemd-logind[1586]: New session 41 of user core. 
Jan 28 01:56:16.486598 kernel: audit: type=1101 audit(1769565376.411:1078): pid=7084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.417000 audit[7084]: CRED_ACQ pid=7084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.499092 containerd[1609]: time="2026-01-28T01:56:16.496529239Z" level=info msg="container event discarded" container=03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e type=CONTAINER_CREATED_EVENT Jan 28 01:56:16.499092 containerd[1609]: time="2026-01-28T01:56:16.496796257Z" level=info msg="container event discarded" container=03c4c464ce884579f86c8423c1f1c099c051c3b727b3c1b00c231655d3b90b5e type=CONTAINER_STARTED_EVENT Jan 28 01:56:16.552094 kernel: audit: type=1103 audit(1769565376.417:1079): pid=7084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.552452 kernel: audit: type=1006 audit(1769565376.417:1080): pid=7084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 28 01:56:16.417000 audit[7084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1a2ba7b0 a2=3 a3=0 items=0 ppid=1 pid=7084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:16.619794 kernel: audit: type=1300 audit(1769565376.417:1080): arch=c000003e 
syscall=1 success=yes exit=3 a0=8 a1=7ffd1a2ba7b0 a2=3 a3=0 items=0 ppid=1 pid=7084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:16.417000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:16.625791 kernel: audit: type=1327 audit(1769565376.417:1080): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:16.648351 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:56:16.673000 audit[7084]: USER_START pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.685000 audit[7088]: CRED_ACQ pid=7088 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.777762 kernel: audit: type=1105 audit(1769565376.673:1081): pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:16.777929 kernel: audit: type=1103 audit(1769565376.685:1082): pid=7088 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:17.211230 containerd[1609]: 
time="2026-01-28T01:56:17.210842349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:56:17.490542 containerd[1609]: time="2026-01-28T01:56:17.484621261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:17.508726 containerd[1609]: time="2026-01-28T01:56:17.507246300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:56:17.508726 containerd[1609]: time="2026-01-28T01:56:17.507405166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:17.508901 kubelet[2967]: E0128 01:56:17.508620 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:56:17.508901 kubelet[2967]: E0128 01:56:17.508766 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:56:17.509565 kubelet[2967]: E0128 01:56:17.508921 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 01:56:17.528044 containerd[1609]: time="2026-01-28T01:56:17.520436896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:56:17.658563 sshd[7088]: Connection closed by 10.0.0.1 port 44078 Jan 28 01:56:17.660959 sshd-session[7084]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:17.662252 containerd[1609]: time="2026-01-28T01:56:17.661970155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:56:17.669597 containerd[1609]: time="2026-01-28T01:56:17.669398453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:56:17.674103 containerd[1609]: time="2026-01-28T01:56:17.673092030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:17.675381 kubelet[2967]: E0128 01:56:17.675068 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:56:17.675381 kubelet[2967]: E0128 01:56:17.675187 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:56:17.675381 kubelet[2967]: E0128 01:56:17.675344 2967 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-882zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ms9md_calico-system(d33e070d-1851-4242-98ee-97e68b203245): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:17.676000 audit[7084]: USER_END pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:17.681221 kubelet[2967]: E0128 01:56:17.679413 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:56:17.690761 systemd[1]: sshd@39-10.0.0.85:22-10.0.0.1:44078.service: Deactivated successfully. Jan 28 01:56:17.710546 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:56:17.717471 systemd-logind[1586]: Session 41 logged out. Waiting for processes to exit. 
Jan 28 01:56:17.728093 kernel: audit: type=1106 audit(1769565377.676:1083): pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:17.676000 audit[7084]: CRED_DISP pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:17.752744 kernel: audit: type=1104 audit(1769565377.676:1084): pid=7084 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:17.736019 systemd-logind[1586]: Removed session 41. Jan 28 01:56:17.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.85:22-10.0.0.1:44078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:18.191515 kubelet[2967]: E0128 01:56:18.186853 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:56:18.862499 containerd[1609]: time="2026-01-28T01:56:18.862398334Z" level=info msg="container event discarded" container=f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f type=CONTAINER_CREATED_EVENT Jan 28 01:56:19.895554 containerd[1609]: time="2026-01-28T01:56:19.895424510Z" level=info msg="container event discarded" container=f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f type=CONTAINER_STARTED_EVENT Jan 28 01:56:21.094541 containerd[1609]: time="2026-01-28T01:56:21.094037211Z" level=info msg="container event discarded" container=f2f9c9de0f2f74607cb005baa83d50286b86ce507ab0f38859b199bcfb1c6d3f type=CONTAINER_STOPPED_EVENT Jan 28 01:56:21.839538 kubelet[2967]: E0128 01:56:21.833099 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:56:21.886604 kubelet[2967]: E0128 01:56:21.886540 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:56:21.930176 containerd[1609]: time="2026-01-28T01:56:21.929455425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:56:22.086740 containerd[1609]: time="2026-01-28T01:56:22.078918392Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 28 01:56:22.108620 containerd[1609]: time="2026-01-28T01:56:22.095900517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:56:22.108620 containerd[1609]: time="2026-01-28T01:56:22.096051238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:56:22.109308 kubelet[2967]: E0128 01:56:22.101286 2967 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:56:22.109308 kubelet[2967]: E0128 01:56:22.101350 2967 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:56:22.109308 kubelet[2967]: E0128 01:56:22.101495 2967 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rq4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654b4ddbfd-mgclm_calico-apiserver(3ef171ed-8146-4d6a-9063-eb31677aa1d4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:56:22.109308 kubelet[2967]: E0128 01:56:22.103872 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:56:22.696818 systemd[1]: Started sshd@40-10.0.0.85:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Jan 28 01:56:22.716878 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:22.717006 kernel: audit: type=1130 audit(1769565382.694:1086): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.85:22-10.0.0.1:48554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:22.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.85:22-10.0.0.1:48554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:22.997000 audit[7115]: USER_ACCT pid=7115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.003257 sshd[7115]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:23.006950 sshd-session[7115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:23.003000 audit[7115]: CRED_ACQ pid=7115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.038291 kernel: audit: type=1101 audit(1769565382.997:1087): pid=7115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.038573 kernel: audit: type=1103 audit(1769565383.003:1088): pid=7115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.060084 systemd-logind[1586]: New session 42 of user core. 
Jan 28 01:56:23.089334 kernel: audit: type=1006 audit(1769565383.003:1089): pid=7115 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 28 01:56:23.003000 audit[7115]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9f298ba0 a2=3 a3=0 items=0 ppid=1 pid=7115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:23.102761 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 28 01:56:23.135523 kernel: audit: type=1300 audit(1769565383.003:1089): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9f298ba0 a2=3 a3=0 items=0 ppid=1 pid=7115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:23.135651 kernel: audit: type=1327 audit(1769565383.003:1089): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:23.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:23.115000 audit[7115]: USER_START pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.211372 kernel: audit: type=1105 audit(1769565383.115:1090): pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.211522 
kernel: audit: type=1103 audit(1769565383.126:1091): pid=7119 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.126000 audit[7119]: CRED_ACQ pid=7119 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.226555 kubelet[2967]: E0128 01:56:23.218973 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:56:23.643537 sshd[7119]: Connection closed by 10.0.0.1 port 48554 Jan 28 01:56:23.643308 sshd-session[7115]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:23.652000 audit[7115]: USER_END pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.652000 audit[7115]: CRED_DISP pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.690415 systemd[1]: 
sshd@40-10.0.0.85:22-10.0.0.1:48554.service: Deactivated successfully. Jan 28 01:56:23.695542 kernel: audit: type=1106 audit(1769565383.652:1092): pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.695646 kernel: audit: type=1104 audit(1769565383.652:1093): pid=7115 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:23.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.85:22-10.0.0.1:48554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:23.703858 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:56:23.712531 systemd-logind[1586]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:56:23.714794 systemd-logind[1586]: Removed session 42. 
Jan 28 01:56:26.326994 kubelet[2967]: E0128 01:56:26.323423 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:56:28.240216 kubelet[2967]: E0128 01:56:28.239501 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:56:28.825475 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:28.825765 kernel: 
audit: type=1130 audit(1769565388.802:1095): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.85:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:28.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.85:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:28.809308 systemd[1]: Started sshd@41-10.0.0.85:22-10.0.0.1:48556.service - OpenSSH per-connection server daemon (10.0.0.1:48556). Jan 28 01:56:29.228000 audit[7139]: USER_ACCT pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.269650 sshd-session[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:29.283348 sshd[7139]: Accepted publickey for core from 10.0.0.1 port 48556 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:29.253000 audit[7139]: CRED_ACQ pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.362182 kernel: audit: type=1101 audit(1769565389.228:1096): pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.362348 kernel: audit: type=1103 audit(1769565389.253:1097): pid=7139 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.362404 kernel: audit: type=1006 audit(1769565389.253:1098): pid=7139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 28 01:56:29.395623 kernel: audit: type=1300 audit(1769565389.253:1098): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff770f3560 a2=3 a3=0 items=0 ppid=1 pid=7139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:29.253000 audit[7139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff770f3560 a2=3 a3=0 items=0 ppid=1 pid=7139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:29.392413 systemd-logind[1586]: New session 43 of user core. Jan 28 01:56:29.442373 kernel: audit: type=1327 audit(1769565389.253:1098): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:29.253000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:29.468055 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 28 01:56:29.497000 audit[7139]: USER_START pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.529000 audit[7143]: CRED_ACQ pid=7143 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.656060 kernel: audit: type=1105 audit(1769565389.497:1099): pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:29.656278 kernel: audit: type=1103 audit(1769565389.529:1100): pid=7143 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:30.215933 kubelet[2967]: E0128 01:56:30.212786 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:56:30.595800 sshd[7143]: Connection closed by 10.0.0.1 port 48556 Jan 
28 01:56:30.614279 sshd-session[7139]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:30.620000 audit[7139]: USER_END pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:30.685251 systemd-logind[1586]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:56:30.688171 systemd[1]: sshd@41-10.0.0.85:22-10.0.0.1:48556.service: Deactivated successfully. Jan 28 01:56:30.698771 kernel: audit: type=1106 audit(1769565390.620:1101): pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:30.699019 kernel: audit: type=1104 audit(1769565390.625:1102): pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:30.625000 audit[7139]: CRED_DISP pid=7139 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:30.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.85:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:30.830329 systemd[1]: session-43.scope: Deactivated successfully. 
Jan 28 01:56:30.868083 systemd-logind[1586]: Removed session 43. Jan 28 01:56:33.226562 kubelet[2967]: E0128 01:56:33.226499 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:56:35.695157 systemd[1]: Started sshd@42-10.0.0.85:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056). Jan 28 01:56:35.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.85:22-10.0.0.1:48056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:56:35.713226 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:56:35.715282 kernel: audit: type=1130 audit(1769565395.690:1104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.85:22-10.0.0.1:48056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:56:36.227053 kubelet[2967]: E0128 01:56:36.225799 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:56:36.308000 audit[7184]: USER_ACCT pid=7184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.327868 sshd-session[7184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:56:36.343488 sshd[7184]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:56:36.387631 kernel: audit: type=1101 audit(1769565396.308:1105): pid=7184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.387868 kernel: audit: type=1103 audit(1769565396.324:1106): pid=7184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.324000 audit[7184]: CRED_ACQ pid=7184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.399200 systemd-logind[1586]: New session 44 of user core. Jan 28 01:56:36.487400 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:56:36.556885 kernel: audit: type=1006 audit(1769565396.324:1107): pid=7184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 28 01:56:36.324000 audit[7184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5bd02ec0 a2=3 a3=0 items=0 ppid=1 pid=7184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:36.653520 kernel: audit: type=1300 audit(1769565396.324:1107): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5bd02ec0 a2=3 a3=0 items=0 ppid=1 pid=7184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:56:36.324000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:36.580000 audit[7184]: USER_START pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.799404 kernel: audit: type=1327 audit(1769565396.324:1107): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:56:36.799553 kernel: audit: type=1105 audit(1769565396.580:1108): pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.813064 kernel: audit: type=1103 audit(1769565396.685:1109): pid=7188 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:36.685000 audit[7188]: CRED_ACQ pid=7188 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:37.347891 sshd[7188]: Connection closed by 10.0.0.1 port 48056 Jan 28 01:56:37.355994 sshd-session[7184]: pam_unix(sshd:session): session closed for user core Jan 28 01:56:37.374000 audit[7184]: USER_END pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:56:37.403217 systemd[1]: sshd@42-10.0.0.85:22-10.0.0.1:48056.service: Deactivated successfully. Jan 28 01:56:37.414810 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:56:37.423936 systemd-logind[1586]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:56:37.433906 systemd-logind[1586]: Removed session 44. 
Jan 28 01:56:37.466281 kernel: audit: type=1106 audit(1769565397.374:1110): pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:37.466441 kernel: audit: type=1104 audit(1769565397.376:1111): pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:37.376000 audit[7184]: CRED_DISP pid=7184 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:37.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.85:22-10.0.0.1:48056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:38.239273 kubelet[2967]: E0128 01:56:38.236947 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:56:39.232659 containerd[1609]: time="2026-01-28T01:56:39.232554191Z" level=info msg="container event discarded" container=81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16 type=CONTAINER_CREATED_EVENT
Jan 28 01:56:39.282545 kubelet[2967]: E0128 01:56:39.271957 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:56:39.300338 kubelet[2967]: E0128 01:56:39.293301 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:56:40.394469 containerd[1609]: time="2026-01-28T01:56:40.394354517Z" level=info msg="container event discarded" container=81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16 type=CONTAINER_STARTED_EVENT
Jan 28 01:56:42.207900 kubelet[2967]: E0128 01:56:42.204548 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:56:42.399287 systemd[1]: Started sshd@43-10.0.0.85:22-10.0.0.1:38272.service - OpenSSH per-connection server daemon (10.0.0.1:38272).
Jan 28 01:56:42.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.85:22-10.0.0.1:38272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:42.428434 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:56:42.428571 kernel: audit: type=1130 audit(1769565402.398:1113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.85:22-10.0.0.1:38272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:42.606000 audit[7205]: USER_ACCT pid=7205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.612509 sshd[7205]: Accepted publickey for core from 10.0.0.1 port 38272 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:56:42.625909 sshd-session[7205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:56:42.642866 kernel: audit: type=1101 audit(1769565402.606:1114): pid=7205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.617000 audit[7205]: CRED_ACQ pid=7205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.675913 systemd-logind[1586]: New session 45 of user core.
Jan 28 01:56:42.707512 kernel: audit: type=1103 audit(1769565402.617:1115): pid=7205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.707635 kernel: audit: type=1006 audit(1769565402.617:1116): pid=7205 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1
Jan 28 01:56:42.617000 audit[7205]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc48adbf20 a2=3 a3=0 items=0 ppid=1 pid=7205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:42.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:42.756452 kernel: audit: type=1300 audit(1769565402.617:1116): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc48adbf20 a2=3 a3=0 items=0 ppid=1 pid=7205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:42.756585 kernel: audit: type=1327 audit(1769565402.617:1116): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:42.761542 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 28 01:56:42.773000 audit[7205]: USER_START pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.817765 kernel: audit: type=1105 audit(1769565402.773:1117): pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.819571 kernel: audit: type=1103 audit(1769565402.789:1118): pid=7209 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:42.789000 audit[7209]: CRED_ACQ pid=7209 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:43.557069 sshd[7209]: Connection closed by 10.0.0.1 port 38272
Jan 28 01:56:43.555984 sshd-session[7205]: pam_unix(sshd:session): session closed for user core
Jan 28 01:56:43.569000 audit[7205]: USER_END pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:43.592801 systemd-logind[1586]: Session 45 logged out. Waiting for processes to exit.
Jan 28 01:56:43.605539 systemd[1]: sshd@43-10.0.0.85:22-10.0.0.1:38272.service: Deactivated successfully.
Jan 28 01:56:43.621804 systemd[1]: session-45.scope: Deactivated successfully.
Jan 28 01:56:43.635968 systemd-logind[1586]: Removed session 45.
Jan 28 01:56:43.659421 kernel: audit: type=1106 audit(1769565403.569:1119): pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:43.569000 audit[7205]: CRED_DISP pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:43.712976 kernel: audit: type=1104 audit(1769565403.569:1120): pid=7205 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:43.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.85:22-10.0.0.1:38272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:44.201066 kubelet[2967]: E0128 01:56:44.200369 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:56:48.645437 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:56:48.645539 kernel: audit: type=1130 audit(1769565408.640:1122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.85:22-10.0.0.1:38276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:48.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.85:22-10.0.0.1:38276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:48.640246 systemd[1]: Started sshd@44-10.0.0.85:22-10.0.0.1:38276.service - OpenSSH per-connection server daemon (10.0.0.1:38276).
Jan 28 01:56:48.947000 audit[7222]: USER_ACCT pid=7222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:48.956311 sshd[7222]: Accepted publickey for core from 10.0.0.1 port 38276 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:56:48.964512 sshd-session[7222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:56:48.993329 containerd[1609]: time="2026-01-28T01:56:48.993237976Z" level=info msg="container event discarded" container=81419c4ce14500e649a57385ceb2b12e707b01e334d037be7e278cbf0996fe16 type=CONTAINER_STOPPED_EVENT
Jan 28 01:56:48.997803 kernel: audit: type=1101 audit(1769565408.947:1123): pid=7222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:48.959000 audit[7222]: CRED_ACQ pid=7222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:48.999815 systemd-logind[1586]: New session 46 of user core.
Jan 28 01:56:49.053633 kernel: audit: type=1103 audit(1769565408.959:1124): pid=7222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:48.959000 audit[7222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6e619fb0 a2=3 a3=0 items=0 ppid=1 pid=7222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:49.105754 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 28 01:56:49.142921 kernel: audit: type=1006 audit(1769565408.959:1125): pid=7222 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1
Jan 28 01:56:49.143020 kernel: audit: type=1300 audit(1769565408.959:1125): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6e619fb0 a2=3 a3=0 items=0 ppid=1 pid=7222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:48.959000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:49.161469 kernel: audit: type=1327 audit(1769565408.959:1125): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:49.197450 kubelet[2967]: E0128 01:56:49.188519 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:56:49.197000 audit[7222]: USER_START pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:49.209387 kubelet[2967]: E0128 01:56:49.206832 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:56:49.238541 kernel: audit: type=1105 audit(1769565409.197:1126): pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:49.225000 audit[7226]: CRED_ACQ pid=7226 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:49.296937 kernel: audit: type=1103 audit(1769565409.225:1127): pid=7226 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:50.065426 sshd[7226]: Connection closed by 10.0.0.1 port 38276
Jan 28 01:56:50.070424 sshd-session[7222]: pam_unix(sshd:session): session closed for user core
Jan 28 01:56:50.073000 audit[7222]: USER_END pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:50.099982 systemd[1]: sshd@44-10.0.0.85:22-10.0.0.1:38276.service: Deactivated successfully.
Jan 28 01:56:50.107830 systemd-logind[1586]: Session 46 logged out. Waiting for processes to exit.
Jan 28 01:56:50.117860 systemd[1]: session-46.scope: Deactivated successfully.
Jan 28 01:56:50.138387 systemd-logind[1586]: Removed session 46.
Jan 28 01:56:50.160316 kernel: audit: type=1106 audit(1769565410.073:1128): pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:50.160472 kernel: audit: type=1104 audit(1769565410.079:1129): pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:50.079000 audit[7222]: CRED_DISP pid=7222 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:50.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.85:22-10.0.0.1:38276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:51.229270 kubelet[2967]: E0128 01:56:51.221883 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:56:54.207573 kubelet[2967]: E0128 01:56:54.203504 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:56:54.225422 kubelet[2967]: E0128 01:56:54.210013 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:56:55.188014 systemd[1]: Started sshd@45-10.0.0.85:22-10.0.0.1:36060.service - OpenSSH per-connection server daemon (10.0.0.1:36060).
Jan 28 01:56:55.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.85:22-10.0.0.1:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:55.204565 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:56:55.204782 kernel: audit: type=1130 audit(1769565415.187:1131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.85:22-10.0.0.1:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:55.221422 kubelet[2967]: E0128 01:56:55.208499 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:56:55.587000 audit[7241]: USER_ACCT pid=7241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.601066 sshd[7241]: Accepted publickey for core from 10.0.0.1 port 36060 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:56:55.605861 sshd-session[7241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:56:55.637992 systemd-logind[1586]: New session 47 of user core.
Jan 28 01:56:55.598000 audit[7241]: CRED_ACQ pid=7241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.663052 kernel: audit: type=1101 audit(1769565415.587:1132): pid=7241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.666305 kernel: audit: type=1103 audit(1769565415.598:1133): pid=7241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.740937 kernel: audit: type=1006 audit(1769565415.598:1134): pid=7241 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1
Jan 28 01:56:55.750943 kernel: audit: type=1300 audit(1769565415.598:1134): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec164f030 a2=3 a3=0 items=0 ppid=1 pid=7241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:55.598000 audit[7241]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec164f030 a2=3 a3=0 items=0 ppid=1 pid=7241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:55.742710 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 28 01:56:55.776372 kernel: audit: type=1327 audit(1769565415.598:1134): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:55.598000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:55.786000 audit[7241]: USER_START pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.815816 kernel: audit: type=1105 audit(1769565415.786:1135): pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.815940 kernel: audit: type=1103 audit(1769565415.812:1136): pid=7245 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:55.812000 audit[7245]: CRED_ACQ pid=7245 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:56.561078 sshd[7245]: Connection closed by 10.0.0.1 port 36060
Jan 28 01:56:56.564569 sshd-session[7241]: pam_unix(sshd:session): session closed for user core
Jan 28 01:56:56.565000 audit[7241]: USER_END pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:56.574000 audit[7241]: CRED_DISP pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:56.641263 kernel: audit: type=1106 audit(1769565416.565:1137): pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:56.641431 kernel: audit: type=1104 audit(1769565416.574:1138): pid=7241 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:56.665348 systemd[1]: sshd@45-10.0.0.85:22-10.0.0.1:36060.service: Deactivated successfully.
Jan 28 01:56:56.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.85:22-10.0.0.1:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:56.677841 systemd[1]: session-47.scope: Deactivated successfully.
Jan 28 01:56:56.695873 systemd-logind[1586]: Session 47 logged out. Waiting for processes to exit.
Jan 28 01:56:56.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.85:22-10.0.0.1:36062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:56.699375 systemd[1]: Started sshd@46-10.0.0.85:22-10.0.0.1:36062.service - OpenSSH per-connection server daemon (10.0.0.1:36062).
Jan 28 01:56:56.712617 systemd-logind[1586]: Removed session 47.
Jan 28 01:56:57.318000 audit[7258]: USER_ACCT pid=7258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:57.334900 sshd[7258]: Accepted publickey for core from 10.0.0.1 port 36062 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:56:57.356000 audit[7258]: CRED_ACQ pid=7258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:57.356000 audit[7258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfd5bfb30 a2=3 a3=0 items=0 ppid=1 pid=7258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:56:57.356000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:56:57.372588 sshd-session[7258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:56:57.443505 systemd-logind[1586]: New session 48 of user core.
Jan 28 01:56:57.472521 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 28 01:56:57.525000 audit[7258]: USER_START pid=7258 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:57.583000 audit[7262]: CRED_ACQ pid=7262 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:59.487465 sshd[7262]: Connection closed by 10.0.0.1 port 36062
Jan 28 01:56:59.490505 sshd-session[7258]: pam_unix(sshd:session): session closed for user core
Jan 28 01:56:59.591000 audit[7258]: USER_END pid=7258 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:59.591000 audit[7258]: CRED_DISP pid=7258 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:56:59.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.85:22-10.0.0.1:36072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:59.630898 systemd[1]: Started sshd@47-10.0.0.85:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072).
Jan 28 01:56:59.639162 systemd[1]: sshd@46-10.0.0.85:22-10.0.0.1:36062.service: Deactivated successfully.
Jan 28 01:56:59.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.85:22-10.0.0.1:36062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:56:59.774581 systemd[1]: session-48.scope: Deactivated successfully.
Jan 28 01:56:59.875994 systemd-logind[1586]: Session 48 logged out. Waiting for processes to exit.
Jan 28 01:56:59.899299 systemd-logind[1586]: Removed session 48.
Jan 28 01:57:00.529000 audit[7271]: USER_ACCT pid=7271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:00.542250 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 28 01:57:00.542418 kernel: audit: type=1101 audit(1769565420.529:1150): pid=7271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:00.542801 sshd[7271]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:57:00.673000 audit[7271]: CRED_ACQ pid=7271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:00.688031 sshd-session[7271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:57:00.750068 kernel: audit: type=1103 audit(1769565420.673:1151): pid=7271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:00.750331 kernel: audit: type=1006 audit(1769565420.673:1152): pid=7271 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1
Jan 28 01:57:00.763436 systemd-logind[1586]: New session 49 of user core.
Jan 28 01:57:00.815286 kernel: audit: type=1300 audit(1769565420.673:1152): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4df0d910 a2=3 a3=0 items=0 ppid=1 pid=7271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:00.673000 audit[7271]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4df0d910 a2=3 a3=0 items=0 ppid=1 pid=7271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:00.673000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:00.912181 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 28 01:57:00.929346 kernel: audit: type=1327 audit(1769565420.673:1152): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:00.958000 audit[7271]: USER_START pid=7271 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:00.965000 audit[7300]: CRED_ACQ pid=7300 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:01.152084 kernel: audit: type=1105 audit(1769565420.958:1153): pid=7271 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:01.152304 kernel: audit: type=1103 audit(1769565420.965:1154): pid=7300 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:01.238470 kubelet[2967]: E0128 01:57:01.233898 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz"
podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:57:03.192792 kubelet[2967]: E0128 01:57:03.192269 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:03.194661 kubelet[2967]: E0128 01:57:03.194587 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:57:04.478222 kubelet[2967]: E0128 01:57:04.473903 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:57:04.621000 audit[7315]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=7315 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:57:04.635505 kernel: audit: type=1325 audit(1769565424.621:1155): table=filter:140 family=2 entries=26 op=nft_register_rule pid=7315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:57:04.635629 kernel: audit: type=1300 audit(1769565424.621:1155): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee41072a0 a2=0 a3=7ffee410728c items=0 ppid=3078 pid=7315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:04.621000 audit[7315]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee41072a0 a2=0 a3=7ffee410728c items=0 ppid=3078 pid=7315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:04.703595 kernel: audit: type=1327 audit(1769565424.621:1155): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:57:04.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:57:04.715000 audit[7315]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=7315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:57:04.715000 audit[7315]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee41072a0 a2=0 a3=0 items=0 ppid=3078 pid=7315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:04.715000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:57:04.868000 audit[7317]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=7317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:57:04.882575 sshd-session[7271]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:04.883858 sshd[7300]: Connection closed by 10.0.0.1 port 36072 Jan 28 01:57:04.868000 audit[7317]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe6dd60de0 a2=0 a3=7ffe6dd60dcc items=0 ppid=3078 pid=7317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:04.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:57:04.898000 audit[7317]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=7317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:57:04.898000 audit[7317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe6dd60de0 a2=0 a3=0 items=0 ppid=3078 pid=7317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:04.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:57:04.900000 audit[7271]: USER_END pid=7271 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 01:57:04.900000 audit[7271]: CRED_DISP pid=7271 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:04.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.85:22-10.0.0.1:36072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:04.917052 systemd[1]: sshd@47-10.0.0.85:22-10.0.0.1:36072.service: Deactivated successfully. Jan 28 01:57:04.938385 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:57:04.950583 systemd-logind[1586]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:57:04.962405 systemd[1]: Started sshd@48-10.0.0.85:22-10.0.0.1:39556.service - OpenSSH per-connection server daemon (10.0.0.1:39556). Jan 28 01:57:04.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.85:22-10.0.0.1:39556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:04.970069 systemd-logind[1586]: Removed session 49. 
Jan 28 01:57:05.663160 sshd[7322]: Accepted publickey for core from 10.0.0.1 port 39556 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:05.680020 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 28 01:57:05.692178 kernel: audit: type=1101 audit(1769565425.659:1163): pid=7322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:05.659000 audit[7322]: USER_ACCT pid=7322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:05.740928 sshd-session[7322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:05.798008 kernel: audit: type=1103 audit(1769565425.720:1164): pid=7322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:05.720000 audit[7322]: CRED_ACQ pid=7322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:05.933307 kernel: audit: type=1006 audit(1769565425.720:1165): pid=7322 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 28 01:57:05.914743 systemd-logind[1586]: New session 50 of user core. 
Jan 28 01:57:05.720000 audit[7322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe2acf010 a2=3 a3=0 items=0 ppid=1 pid=7322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:05.720000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:06.095586 kernel: audit: type=1300 audit(1769565425.720:1165): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe2acf010 a2=3 a3=0 items=0 ppid=1 pid=7322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:06.095834 kernel: audit: type=1327 audit(1769565425.720:1165): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:06.108463 systemd[1]: Started session-50.scope - Session 50 of User core. 
Jan 28 01:57:06.200287 kubelet[2967]: E0128 01:57:06.199282 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:06.203771 kubelet[2967]: E0128 01:57:06.203332 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:57:06.260317 kernel: audit: type=1105 audit(1769565426.208:1166): pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:06.208000 audit[7322]: USER_START pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:06.214000 audit[7326]: CRED_ACQ pid=7326 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:06.310915 kernel: audit: type=1103 audit(1769565426.214:1167): pid=7326 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:07.323181 kubelet[2967]: E0128 01:57:07.323071 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:57:08.316795 sshd[7326]: Connection closed by 10.0.0.1 port 39556 Jan 28 01:57:08.319980 sshd-session[7322]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:08.342000 audit[7322]: USER_END pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:08.403831 kernel: audit: type=1106 audit(1769565428.342:1168): pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:08.343000 audit[7322]: CRED_DISP pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:08.448493 systemd[1]: 
sshd@48-10.0.0.85:22-10.0.0.1:39556.service: Deactivated successfully. Jan 28 01:57:08.471815 kernel: audit: type=1104 audit(1769565428.343:1169): pid=7322 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:08.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.85:22-10.0.0.1:39556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:08.474031 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:57:08.512191 kernel: audit: type=1131 audit(1769565428.454:1170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.85:22-10.0.0.1:39556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:08.536957 systemd-logind[1586]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:57:08.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.85:22-10.0.0.1:39564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:08.558934 systemd[1]: Started sshd@49-10.0.0.85:22-10.0.0.1:39564.service - OpenSSH per-connection server daemon (10.0.0.1:39564). Jan 28 01:57:08.601972 systemd-logind[1586]: Removed session 50. 
Jan 28 01:57:09.241000 audit[7337]: USER_ACCT pid=7337 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:09.257471 kubelet[2967]: E0128 01:57:09.237168 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:57:09.258262 sshd[7337]: Accepted publickey for core from 10.0.0.1 port 39564 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:09.273000 audit[7337]: CRED_ACQ pid=7337 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:09.279000 audit[7337]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc6a548c0 a2=3 a3=0 items=0 ppid=1 pid=7337 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:57:09.279000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:09.304645 sshd-session[7337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:09.356874 systemd-logind[1586]: New session 51 of user core. Jan 28 01:57:09.382931 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 28 01:57:09.430000 audit[7337]: USER_START pid=7337 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:09.440000 audit[7341]: CRED_ACQ pid=7341 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:10.320928 sshd[7341]: Connection closed by 10.0.0.1 port 39564 Jan 28 01:57:10.323772 sshd-session[7337]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:10.347000 audit[7337]: USER_END pid=7337 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:10.347000 audit[7337]: CRED_DISP pid=7337 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:10.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@49-10.0.0.85:22-10.0.0.1:39564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:10.376392 systemd[1]: sshd@49-10.0.0.85:22-10.0.0.1:39564.service: Deactivated successfully. Jan 28 01:57:10.432246 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:57:10.470539 systemd-logind[1586]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:57:10.481489 systemd-logind[1586]: Removed session 51. Jan 28 01:57:13.203002 kubelet[2967]: E0128 01:57:13.201343 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:57:14.256820 kubelet[2967]: E0128 01:57:14.254501 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:15.340377 kubelet[2967]: E0128 01:57:15.335234 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:57:15.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.85:22-10.0.0.1:50196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:15.514446 systemd[1]: Started sshd@50-10.0.0.85:22-10.0.0.1:50196.service - OpenSSH per-connection server daemon (10.0.0.1:50196). Jan 28 01:57:15.538357 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 28 01:57:15.538542 kernel: audit: type=1130 audit(1769565435.509:1180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.85:22-10.0.0.1:50196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:16.413000 audit[7357]: USER_ACCT pid=7357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:16.490965 sshd[7357]: Accepted publickey for core from 10.0.0.1 port 50196 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:16.510301 kernel: audit: type=1101 audit(1769565436.413:1181): pid=7357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:16.504000 audit[7357]: CRED_ACQ pid=7357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:16.518851 sshd-session[7357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:16.575283 systemd-logind[1586]: New session 52 of user core. Jan 28 01:57:16.650608 kernel: audit: type=1103 audit(1769565436.504:1182): pid=7357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:16.650889 kernel: audit: type=1006 audit(1769565436.504:1183): pid=7357 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 28 01:57:16.650945 kernel: audit: type=1300 audit(1769565436.504:1183): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebb805f40 a2=3 a3=0 items=0 ppid=1 pid=7357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:16.504000 audit[7357]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebb805f40 a2=3 a3=0 items=0 ppid=1 pid=7357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:16.801332 kernel: audit: type=1327 audit(1769565436.504:1183): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:16.504000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:16.787163 systemd[1]: Started session-52.scope - Session 52 of User core. 
Jan 28 01:57:16.898000 audit[7357]: USER_START pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:16.922000 audit[7361]: CRED_ACQ pid=7361 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.114251 kernel: audit: type=1105 audit(1769565436.898:1184): pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.114423 kernel: audit: type=1103 audit(1769565436.922:1185): pid=7361 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.234261 kubelet[2967]: E0128 01:57:17.232899 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:57:17.862061 sshd[7361]: Connection closed by 10.0.0.1 port 50196 Jan 
28 01:57:17.871640 sshd-session[7357]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:17.885000 audit[7357]: USER_END pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.886000 audit[7357]: CRED_DISP pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.932323 systemd[1]: sshd@50-10.0.0.85:22-10.0.0.1:50196.service: Deactivated successfully. Jan 28 01:57:17.944350 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:57:17.953014 kernel: audit: type=1106 audit(1769565437.885:1186): pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.953158 kernel: audit: type=1104 audit(1769565437.886:1187): pid=7357 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:17.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.85:22-10.0.0.1:50196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:17.973831 systemd-logind[1586]: Session 52 logged out. Waiting for processes to exit. 
Jan 28 01:57:17.996494 systemd-logind[1586]: Removed session 52. Jan 28 01:57:18.200385 kubelet[2967]: E0128 01:57:18.195600 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:18.201333 kubelet[2967]: E0128 01:57:18.201015 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:57:18.201333 kubelet[2967]: E0128 01:57:18.201081 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:57:20.190053 kubelet[2967]: E0128 01:57:20.188829 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:21.192608 kubelet[2967]: E0128 01:57:21.187867 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 
01:57:21.192608 kubelet[2967]: E0128 01:57:21.192012 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:57:22.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.85:22-10.0.0.1:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:22.938191 systemd[1]: Started sshd@51-10.0.0.85:22-10.0.0.1:35114.service - OpenSSH per-connection server daemon (10.0.0.1:35114). Jan 28 01:57:22.967315 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:57:22.967471 kernel: audit: type=1130 audit(1769565442.937:1189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.85:22-10.0.0.1:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:57:23.333000 audit[7374]: USER_ACCT pid=7374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.334077 sshd[7374]: Accepted publickey for core from 10.0.0.1 port 35114 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:23.400011 sshd-session[7374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:23.440469 systemd-logind[1586]: New session 53 of user core. Jan 28 01:57:23.473950 kernel: audit: type=1101 audit(1769565443.333:1190): pid=7374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.474186 kernel: audit: type=1103 audit(1769565443.346:1191): pid=7374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.346000 audit[7374]: CRED_ACQ pid=7374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.637172 kernel: audit: type=1006 audit(1769565443.346:1192): pid=7374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 28 01:57:23.633369 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 28 01:57:23.346000 audit[7374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc3949400 a2=3 a3=0 items=0 ppid=1 pid=7374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:23.774288 kernel: audit: type=1300 audit(1769565443.346:1192): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc3949400 a2=3 a3=0 items=0 ppid=1 pid=7374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:23.774517 kernel: audit: type=1327 audit(1769565443.346:1192): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:23.346000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:23.716000 audit[7374]: USER_START pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.888909 kernel: audit: type=1105 audit(1769565443.716:1193): pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:23.889157 kernel: audit: type=1103 audit(1769565443.738:1194): pid=7378 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 
01:57:23.738000 audit[7378]: CRED_ACQ pid=7378 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:24.330224 sshd[7378]: Connection closed by 10.0.0.1 port 35114 Jan 28 01:57:24.333996 sshd-session[7374]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:24.349000 audit[7374]: USER_END pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:24.377007 systemd[1]: sshd@51-10.0.0.85:22-10.0.0.1:35114.service: Deactivated successfully. Jan 28 01:57:24.407461 systemd[1]: session-53.scope: Deactivated successfully. Jan 28 01:57:24.349000 audit[7374]: CRED_DISP pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:24.418437 systemd-logind[1586]: Session 53 logged out. Waiting for processes to exit. Jan 28 01:57:24.424840 systemd-logind[1586]: Removed session 53. 
Jan 28 01:57:24.467947 kernel: audit: type=1106 audit(1769565444.349:1195): pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:24.479582 kernel: audit: type=1104 audit(1769565444.349:1196): pid=7374 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:24.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.85:22-10.0.0.1:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:25.189365 kubelet[2967]: E0128 01:57:25.188809 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:57:28.195572 kubelet[2967]: E0128 01:57:28.195274 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:57:29.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.85:22-10.0.0.1:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:29.380236 systemd[1]: Started sshd@52-10.0.0.85:22-10.0.0.1:35124.service - OpenSSH per-connection server daemon (10.0.0.1:35124). Jan 28 01:57:29.401793 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:57:29.401983 kernel: audit: type=1130 audit(1769565449.378:1198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.85:22-10.0.0.1:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:57:29.956000 audit[7392]: USER_ACCT pid=7392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:29.971975 sshd[7392]: Accepted publickey for core from 10.0.0.1 port 35124 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:29.977657 sshd-session[7392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:29.993754 kernel: audit: type=1101 audit(1769565449.956:1199): pid=7392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:29.970000 audit[7392]: CRED_ACQ pid=7392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.024746 kernel: audit: type=1103 audit(1769565449.970:1200): pid=7392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.079990 kernel: audit: type=1006 audit(1769565449.970:1201): pid=7392 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=54 res=1 Jan 28 01:57:29.970000 audit[7392]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2301bf0 a2=3 a3=0 items=0 ppid=1 pid=7392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:29.970000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:30.175525 kernel: audit: type=1300 audit(1769565449.970:1201): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2301bf0 a2=3 a3=0 items=0 ppid=1 pid=7392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:30.175645 kernel: audit: type=1327 audit(1769565449.970:1201): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:30.162815 systemd-logind[1586]: New session 54 of user core. Jan 28 01:57:30.209229 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 28 01:57:30.227008 kubelet[2967]: E0128 01:57:30.214296 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:57:30.276000 audit[7392]: USER_START pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.364790 kernel: audit: type=1105 audit(1769565450.276:1202): pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.330000 audit[7396]: CRED_ACQ pid=7396 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.386846 kernel: audit: type=1103 audit(1769565450.330:1203): pid=7396 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.836240 sshd[7396]: Connection closed by 10.0.0.1 port 35124 Jan 28 01:57:30.833948 sshd-session[7392]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:30.856000 audit[7392]: USER_END pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.856000 audit[7392]: CRED_DISP pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.912479 systemd[1]: sshd@52-10.0.0.85:22-10.0.0.1:35124.service: Deactivated successfully. 
Jan 28 01:57:30.923325 kernel: audit: type=1106 audit(1769565450.856:1204): pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.923611 kernel: audit: type=1104 audit(1769565450.856:1205): pid=7392 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:30.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.85:22-10.0.0.1:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:30.934224 systemd[1]: session-54.scope: Deactivated successfully. Jan 28 01:57:30.958281 systemd-logind[1586]: Session 54 logged out. Waiting for processes to exit. Jan 28 01:57:30.983631 systemd-logind[1586]: Removed session 54. 
Jan 28 01:57:32.193200 kubelet[2967]: E0128 01:57:32.192587 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:57:32.193959 kubelet[2967]: E0128 01:57:32.192595 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:57:33.200009 kubelet[2967]: E0128 01:57:33.199861 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:57:36.003074 systemd[1]: Started sshd@53-10.0.0.85:22-10.0.0.1:46190.service - OpenSSH per-connection server daemon (10.0.0.1:46190). Jan 28 01:57:36.059352 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:57:36.059770 kernel: audit: type=1130 audit(1769565455.996:1207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.85:22-10.0.0.1:46190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:35.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.85:22-10.0.0.1:46190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:36.432000 audit[7438]: USER_ACCT pid=7438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.447030 sshd[7438]: Accepted publickey for core from 10.0.0.1 port 46190 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:57:36.459590 sshd-session[7438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:57:36.479858 kernel: audit: type=1101 audit(1769565456.432:1208): pid=7438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.452000 audit[7438]: CRED_ACQ pid=7438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.494785 kernel: audit: type=1103 audit(1769565456.452:1209): pid=7438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.494944 kernel: audit: type=1006 audit(1769565456.452:1210): pid=7438 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Jan 28 01:57:36.516830 kernel: audit: type=1300 audit(1769565456.452:1210): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcf79b340 a2=3 a3=0 items=0 ppid=1 pid=7438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:36.452000 audit[7438]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcf79b340 a2=3 a3=0 items=0 ppid=1 pid=7438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:57:36.505238 systemd-logind[1586]: New session 55 of user core. Jan 28 01:57:36.452000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:36.523745 kernel: audit: type=1327 audit(1769565456.452:1210): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:57:36.527139 systemd[1]: Started session-55.scope - Session 55 of User core. 
Jan 28 01:57:36.550000 audit[7438]: USER_START pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.569945 kernel: audit: type=1105 audit(1769565456.550:1211): pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.570000 audit[7442]: CRED_ACQ pid=7442 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:36.602786 kernel: audit: type=1103 audit(1769565456.570:1212): pid=7442 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:37.187286 kubelet[2967]: E0128 01:57:37.187244 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:57:37.193742 sshd[7442]: Connection closed by 10.0.0.1 port 46190 Jan 28 01:57:37.194391 
sshd-session[7438]: pam_unix(sshd:session): session closed for user core Jan 28 01:57:37.195000 audit[7438]: USER_END pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:37.224057 kernel: audit: type=1106 audit(1769565457.195:1213): pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:37.224221 kernel: audit: type=1104 audit(1769565457.195:1214): pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:37.195000 audit[7438]: CRED_DISP pid=7438 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:57:37.214581 systemd[1]: sshd@53-10.0.0.85:22-10.0.0.1:46190.service: Deactivated successfully. Jan 28 01:57:37.221765 systemd[1]: session-55.scope: Deactivated successfully. Jan 28 01:57:37.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.85:22-10.0.0.1:46190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:57:37.229277 systemd-logind[1586]: Session 55 logged out. Waiting for processes to exit. 
Jan 28 01:57:37.231545 systemd-logind[1586]: Removed session 55.
Jan 28 01:57:40.192881 kubelet[2967]: E0128 01:57:40.192745 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:57:41.202757 kubelet[2967]: E0128 01:57:41.202018 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:57:42.199743 kubelet[2967]: E0128 01:57:42.199510 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:57:42.287268 kubelet[2967]: E0128 01:57:42.287035 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:57:42.408256 systemd[1]: Started sshd@54-10.0.0.85:22-10.0.0.1:46194.service - OpenSSH per-connection server daemon (10.0.0.1:46194).
Jan 28 01:57:42.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.85:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:42.433558 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:57:42.433792 kernel: audit: type=1130 audit(1769565462.407:1216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.85:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:43.033000 audit[7457]: USER_ACCT pid=7457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.058321 sshd-session[7457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:57:43.064231 kernel: audit: type=1101 audit(1769565463.033:1217): pid=7457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.064288 sshd[7457]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:57:43.045000 audit[7457]: CRED_ACQ pid=7457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.090801 systemd-logind[1586]: New session 56 of user core.
Jan 28 01:57:43.103304 kernel: audit: type=1103 audit(1769565463.045:1218): pid=7457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.103533 kernel: audit: type=1006 audit(1769565463.045:1219): pid=7457 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1
Jan 28 01:57:43.103590 kernel: audit: type=1300 audit(1769565463.045:1219): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce6fec5b0 a2=3 a3=0 items=0 ppid=1 pid=7457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:43.045000 audit[7457]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce6fec5b0 a2=3 a3=0 items=0 ppid=1 pid=7457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:43.150820 kernel: audit: type=1327 audit(1769565463.045:1219): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:43.045000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:43.139391 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 28 01:57:43.177000 audit[7457]: USER_START pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.240040 kernel: audit: type=1105 audit(1769565463.177:1220): pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.240195 kernel: audit: type=1103 audit(1769565463.217:1221): pid=7461 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.217000 audit[7461]: CRED_ACQ pid=7461 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.740962 sshd[7461]: Connection closed by 10.0.0.1 port 46194
Jan 28 01:57:43.768884 sshd-session[7457]: pam_unix(sshd:session): session closed for user core
Jan 28 01:57:43.790000 audit[7457]: USER_END pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.812734 systemd[1]: sshd@54-10.0.0.85:22-10.0.0.1:46194.service: Deactivated successfully.
Jan 28 01:57:43.842455 systemd[1]: session-56.scope: Deactivated successfully.
Jan 28 01:57:43.790000 audit[7457]: CRED_DISP pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.880947 kernel: audit: type=1106 audit(1769565463.790:1222): pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.881235 kernel: audit: type=1104 audit(1769565463.790:1223): pid=7457 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:43.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.85:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:43.884362 systemd-logind[1586]: Session 56 logged out. Waiting for processes to exit.
Jan 28 01:57:43.887008 systemd-logind[1586]: Removed session 56.
Jan 28 01:57:45.239231 kubelet[2967]: E0128 01:57:45.232070 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:57:46.206270 kubelet[2967]: E0128 01:57:46.205890 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:57:48.222830 kubelet[2967]: E0128 01:57:48.221819 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:57:48.935390 systemd[1]: Started sshd@55-10.0.0.85:22-10.0.0.1:41278.service - OpenSSH per-connection server daemon (10.0.0.1:41278).
Jan 28 01:57:48.940269 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:57:48.941041 kernel: audit: type=1130 audit(1769565468.934:1225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.85:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:48.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.85:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:49.462000 audit[7474]: USER_ACCT pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.470938 sshd[7474]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:57:49.504444 sshd-session[7474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:57:49.480000 audit[7474]: CRED_ACQ pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.533240 kernel: audit: type=1101 audit(1769565469.462:1226): pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.533530 kernel: audit: type=1103 audit(1769565469.480:1227): pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.533792 kernel: audit: type=1006 audit(1769565469.480:1228): pid=7474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1
Jan 28 01:57:49.559814 systemd-logind[1586]: New session 57 of user core.
Jan 28 01:57:49.600848 kernel: audit: type=1300 audit(1769565469.480:1228): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed0b8c460 a2=3 a3=0 items=0 ppid=1 pid=7474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:49.480000 audit[7474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed0b8c460 a2=3 a3=0 items=0 ppid=1 pid=7474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:49.480000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:49.628596 kernel: audit: type=1327 audit(1769565469.480:1228): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:49.627184 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 28 01:57:49.683000 audit[7474]: USER_START pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.749000 audit[7478]: CRED_ACQ pid=7478 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.819037 kernel: audit: type=1105 audit(1769565469.683:1229): pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:49.819212 kernel: audit: type=1103 audit(1769565469.749:1230): pid=7478 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:50.229582 kubelet[2967]: E0128 01:57:50.228645 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:57:50.512945 sshd[7478]: Connection closed by 10.0.0.1 port 41278
Jan 28 01:57:50.514162 sshd-session[7474]: pam_unix(sshd:session): session closed for user core
Jan 28 01:57:50.525000 audit[7474]: USER_END pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:50.593361 kernel: audit: type=1106 audit(1769565470.525:1231): pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:50.570000 audit[7474]: CRED_DISP pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:50.615995 systemd[1]: sshd@55-10.0.0.85:22-10.0.0.1:41278.service: Deactivated successfully.
Jan 28 01:57:50.620777 kernel: audit: type=1104 audit(1769565470.570:1232): pid=7474 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:50.617330 systemd-logind[1586]: Session 57 logged out. Waiting for processes to exit.
Jan 28 01:57:50.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.85:22-10.0.0.1:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:50.636055 systemd[1]: session-57.scope: Deactivated successfully.
Jan 28 01:57:50.660513 systemd-logind[1586]: Removed session 57.
Jan 28 01:57:55.628432 systemd[1]: Started sshd@56-10.0.0.85:22-10.0.0.1:53772.service - OpenSSH per-connection server daemon (10.0.0.1:53772).
Jan 28 01:57:55.683563 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:57:55.683765 kernel: audit: type=1130 audit(1769565475.626:1234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.85:22-10.0.0.1:53772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:55.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.85:22-10.0.0.1:53772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:55.925735 containerd[1609]: time="2026-01-28T01:57:55.920975820Z" level=info msg="container event discarded" container=cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14 type=CONTAINER_CREATED_EVENT
Jan 28 01:57:56.019000 audit[7491]: USER_ACCT pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.023482 sshd[7491]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:57:56.039819 sshd-session[7491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:57:56.059995 kernel: audit: type=1101 audit(1769565476.019:1235): pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.027000 audit[7491]: CRED_ACQ pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.101361 kernel: audit: type=1103 audit(1769565476.027:1236): pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.102417 kernel: audit: type=1006 audit(1769565476.027:1237): pid=7491 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1
Jan 28 01:57:56.027000 audit[7491]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff96a0e200 a2=3 a3=0 items=0 ppid=1 pid=7491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:56.111776 systemd-logind[1586]: New session 58 of user core.
Jan 28 01:57:56.027000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:56.153286 kernel: audit: type=1300 audit(1769565476.027:1237): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff96a0e200 a2=3 a3=0 items=0 ppid=1 pid=7491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:56.154050 kernel: audit: type=1327 audit(1769565476.027:1237): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:57:56.163631 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 28 01:57:56.189000 audit[7491]: USER_START pid=7491 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.208868 kubelet[2967]: E0128 01:57:56.208654 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9"
Jan 28 01:57:56.242889 kernel: audit: type=1105 audit(1769565476.189:1238): pid=7491 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.243625 kernel: audit: type=1103 audit(1769565476.213:1239): pid=7495 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.213000 audit[7495]: CRED_ACQ pid=7495 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.500000 audit[7505]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=7505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:57:56.523429 kernel: audit: type=1325 audit(1769565476.500:1240): table=filter:144 family=2 entries=26 op=nft_register_rule pid=7505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:57:56.524882 kernel: audit: type=1300 audit(1769565476.500:1240): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec14a52b0 a2=0 a3=7ffec14a529c items=0 ppid=3078 pid=7505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:56.500000 audit[7505]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec14a52b0 a2=0 a3=7ffec14a529c items=0 ppid=3078 pid=7505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:56.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:57:56.558000 audit[7505]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=7505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 28 01:57:56.558000 audit[7505]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffec14a52b0 a2=0 a3=7ffec14a529c items=0 ppid=3078 pid=7505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:57:56.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 28 01:57:56.636820 sshd[7495]: Connection closed by 10.0.0.1 port 53772
Jan 28 01:57:56.640248 sshd-session[7491]: pam_unix(sshd:session): session closed for user core
Jan 28 01:57:56.656000 audit[7491]: USER_END pid=7491 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.656000 audit[7491]: CRED_DISP pid=7491 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:57:56.701890 systemd[1]: sshd@56-10.0.0.85:22-10.0.0.1:53772.service: Deactivated successfully.
Jan 28 01:57:56.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.85:22-10.0.0.1:53772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:57:56.727855 systemd[1]: session-58.scope: Deactivated successfully.
Jan 28 01:57:56.740069 systemd-logind[1586]: Session 58 logged out. Waiting for processes to exit.
Jan 28 01:57:56.760953 systemd-logind[1586]: Removed session 58.
Jan 28 01:57:57.202827 kubelet[2967]: E0128 01:57:57.199915 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57"
Jan 28 01:57:57.205260 kubelet[2967]: E0128 01:57:57.204588 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245"
Jan 28 01:57:57.350913 containerd[1609]: time="2026-01-28T01:57:57.348835627Z" level=info msg="container event discarded" container=cfe1d753dc41ba4f5abc7edf74b3294df038bf0a1abd52ceb41d9421bf1f7d14 type=CONTAINER_STARTED_EVENT
Jan 28 01:57:59.189990 kubelet[2967]: E0128 01:57:59.189915 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4"
Jan 28 01:57:59.193143 kubelet[2967]: E0128 01:57:59.191783 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3"
Jan 28 01:58:01.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.85:22-10.0.0.1:53784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:58:01.737427 systemd[1]: Started sshd@57-10.0.0.85:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784).
Jan 28 01:58:01.772870 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 28 01:58:01.772976 kernel: audit: type=1130 audit(1769565481.736:1245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.85:22-10.0.0.1:53784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:58:02.136000 audit[7546]: USER_ACCT pid=7546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.168296 sshd-session[7546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:58:02.169568 sshd[7546]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA
Jan 28 01:58:02.237316 kernel: audit: type=1101 audit(1769565482.136:1246): pid=7546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.237442 kernel: audit: type=1103 audit(1769565482.152:1247): pid=7546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.152000 audit[7546]: CRED_ACQ pid=7546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.237569 kubelet[2967]: E0128 01:58:02.229233 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df"
Jan 28 01:58:02.298766 kernel: audit: type=1006 audit(1769565482.152:1248): pid=7546 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1
Jan 28 01:58:02.152000 audit[7546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed2c95390 a2=3 a3=0 items=0 ppid=1 pid=7546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:58:02.304468 systemd-logind[1586]: New session 59 of user core.
Jan 28 01:58:02.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:58:02.395273 kernel: audit: type=1300 audit(1769565482.152:1248): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed2c95390 a2=3 a3=0 items=0 ppid=1 pid=7546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:58:02.395411 kernel: audit: type=1327 audit(1769565482.152:1248): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:58:02.420617 systemd[1]: Started session-59.scope - Session 59 of User core.
Jan 28 01:58:02.442000 audit[7546]: USER_START pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.526436 kernel: audit: type=1105 audit(1769565482.442:1249): pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.526593 kernel: audit: type=1103 audit(1769565482.476:1250): pid=7550 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:02.476000 audit[7550]: CRED_ACQ pid=7550 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:03.503791 sshd[7550]: Connection closed by 10.0.0.1 port 53784
Jan 28 01:58:03.508019 sshd-session[7546]: pam_unix(sshd:session): session closed for user core
Jan 28 01:58:03.526000 audit[7546]: USER_END pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:03.592011 kernel: audit: type=1106 audit(1769565483.526:1251): pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:03.587806 systemd[1]: sshd@57-10.0.0.85:22-10.0.0.1:53784.service: Deactivated successfully.
Jan 28 01:58:03.610248 kernel: audit: type=1104 audit(1769565483.529:1252): pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:03.529000 audit[7546]: CRED_DISP pid=7546 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 01:58:03.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.85:22-10.0.0.1:53784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:58:03.623927 systemd-logind[1586]: Session 59 logged out. Waiting for processes to exit.
Jan 28 01:58:03.627897 systemd[1]: session-59.scope: Deactivated successfully.
Jan 28 01:58:03.681470 systemd-logind[1586]: Removed session 59.
Jan 28 01:58:06.885938 containerd[1609]: time="2026-01-28T01:58:06.883350706Z" level=info msg="container event discarded" container=79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44 type=CONTAINER_CREATED_EVENT
Jan 28 01:58:06.885938 containerd[1609]: time="2026-01-28T01:58:06.883409547Z" level=info msg="container event discarded" container=79aa262556f27f12e6145b5d454acc06ea81ff31a7e90a91996bdae98861cd44 type=CONTAINER_STARTED_EVENT
Jan 28 01:58:07.320271 containerd[1609]: time="2026-01-28T01:58:07.318606227Z" level=info msg="container event discarded" container=1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a type=CONTAINER_CREATED_EVENT
Jan 28 01:58:07.320271 containerd[1609]: time="2026-01-28T01:58:07.319887634Z" level=info msg="container event discarded" container=1a06a69ed842454f7b4a8690431a86c6694ed54ad50513664b73b4f8fa09189a type=CONTAINER_STARTED_EVENT
Jan 28 01:58:07.415479 containerd[1609]: time="2026-01-28T01:58:07.410643778Z" level=info msg="container event discarded" container=eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9 type=CONTAINER_CREATED_EVENT
Jan 28 01:58:07.415479 containerd[1609]: time="2026-01-28T01:58:07.412376266Z" level=info msg="container event discarded" container=eae35b934e74ee3fe543d9ccb88fbbb467205fe70525fb65e1bdc44b893f3cc9 type=CONTAINER_STARTED_EVENT
Jan 28 01:58:07.751600 containerd[1609]: time="2026-01-28T01:58:07.745050006Z" level=info msg="container event discarded" container=2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57 type=CONTAINER_CREATED_EVENT
Jan 28 01:58:08.231657 kubelet[2967]: E0128 01:58:08.218907 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:58:08.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.85:22-10.0.0.1:36680
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:08.541259 systemd[1]: Started sshd@58-10.0.0.85:22-10.0.0.1:36680.service - OpenSSH per-connection server daemon (10.0.0.1:36680). Jan 28 01:58:08.580790 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:58:08.580967 kernel: audit: type=1130 audit(1769565488.537:1254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.85:22-10.0.0.1:36680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:09.004000 audit[7571]: USER_ACCT pid=7571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.008860 sshd[7571]: Accepted publickey for core from 10.0.0.1 port 36680 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:58:09.025868 sshd-session[7571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:58:09.086863 kernel: audit: type=1101 audit(1769565489.004:1255): pid=7571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.015000 audit[7571]: CRED_ACQ pid=7571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.121068 systemd-logind[1586]: New session 60 of user core. 
Jan 28 01:58:09.183802 kernel: audit: type=1103 audit(1769565489.015:1256): pid=7571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.183965 kernel: audit: type=1006 audit(1769565489.015:1257): pid=7571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 28 01:58:09.015000 audit[7571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff4f110f0 a2=3 a3=0 items=0 ppid=1 pid=7571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:09.191277 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 28 01:58:09.015000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:09.228536 kernel: audit: type=1300 audit(1769565489.015:1257): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff4f110f0 a2=3 a3=0 items=0 ppid=1 pid=7571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:09.228659 kernel: audit: type=1327 audit(1769565489.015:1257): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:09.229000 audit[7571]: USER_START pid=7571 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.287755 kernel: audit: type=1105 audit(1769565489.229:1258): pid=7571 uid=0 auid=500 ses=60 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.294000 audit[7575]: CRED_ACQ pid=7575 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.337244 kernel: audit: type=1103 audit(1769565489.294:1259): pid=7575 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.774871 containerd[1609]: time="2026-01-28T01:58:09.773413637Z" level=info msg="container event discarded" container=2b2442e74922671b04d854b397946fed8bed17cb4e89548493fcb927250c7a57 type=CONTAINER_STARTED_EVENT Jan 28 01:58:09.830245 sshd[7575]: Connection closed by 10.0.0.1 port 36680 Jan 28 01:58:09.832286 sshd-session[7571]: pam_unix(sshd:session): session closed for user core Jan 28 01:58:09.842000 audit[7571]: USER_END pid=7571 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.861013 systemd[1]: sshd@58-10.0.0.85:22-10.0.0.1:36680.service: Deactivated successfully. Jan 28 01:58:09.861651 systemd-logind[1586]: Session 60 logged out. Waiting for processes to exit. Jan 28 01:58:09.886654 systemd[1]: session-60.scope: Deactivated successfully. Jan 28 01:58:09.900247 systemd-logind[1586]: Removed session 60. 
Jan 28 01:58:09.905449 kernel: audit: type=1106 audit(1769565489.842:1260): pid=7571 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.844000 audit[7571]: CRED_DISP pid=7571 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:09.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.85:22-10.0.0.1:36680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:09.970658 kernel: audit: type=1104 audit(1769565489.844:1261): pid=7571 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:10.223251 kubelet[2967]: E0128 01:58:10.221335 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ms9md" podUID="d33e070d-1851-4242-98ee-97e68b203245" Jan 28 01:58:10.225855 kubelet[2967]: E0128 01:58:10.225745 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849fc56f8-v9sqx" podUID="67371941-5272-4e0e-84ef-cf7de9065a57" Jan 28 01:58:11.272631 kubelet[2967]: E0128 01:58:11.265769 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:58:11.284947 kubelet[2967]: E0128 01:58:11.282990 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:58:13.214192 
kubelet[2967]: E0128 01:58:13.210167 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb5cb5d8-9zmvs" podUID="f9057416-92cd-485c-b269-9b046834d5f3" Jan 28 01:58:13.604038 containerd[1609]: time="2026-01-28T01:58:13.602557982Z" level=info msg="container event discarded" container=a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284 type=CONTAINER_CREATED_EVENT Jan 28 01:58:13.604038 containerd[1609]: time="2026-01-28T01:58:13.602621822Z" level=info msg="container event discarded" container=a2f44c7575cdf31470369af168238ff90920b90e10f426a2be08efde461a1284 type=CONTAINER_STARTED_EVENT Jan 28 01:58:14.215584 kubelet[2967]: E0128 01:58:14.206323 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv2sz" podUID="be8a6b52-634d-45dc-a492-0c042b64c6df" Jan 28 01:58:14.985248 kernel: kauditd_printk_skb: 1 callbacks 
suppressed Jan 28 01:58:14.985632 kernel: audit: type=1130 audit(1769565494.931:1263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.85:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:14.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.85:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:14.933474 systemd[1]: Started sshd@59-10.0.0.85:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002). Jan 28 01:58:15.446530 sshd[7590]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:58:15.440000 audit[7590]: USER_ACCT pid=7590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.458200 sshd-session[7590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:58:15.498459 kernel: audit: type=1101 audit(1769565495.440:1264): pid=7590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.498603 kernel: audit: type=1103 audit(1769565495.448:1265): pid=7590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.448000 audit[7590]: CRED_ACQ pid=7590 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.519490 systemd-logind[1586]: New session 61 of user core. Jan 28 01:58:15.454000 audit[7590]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce87bea90 a2=3 a3=0 items=0 ppid=1 pid=7590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:15.628320 kernel: audit: type=1006 audit(1769565495.454:1266): pid=7590 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 28 01:58:15.628484 kernel: audit: type=1300 audit(1769565495.454:1266): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce87bea90 a2=3 a3=0 items=0 ppid=1 pid=7590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:15.454000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:15.647892 systemd[1]: Started session-61.scope - Session 61 of User core. 
Jan 28 01:58:15.652365 kernel: audit: type=1327 audit(1769565495.454:1266): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:15.708000 audit[7590]: USER_START pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.747805 kernel: audit: type=1105 audit(1769565495.708:1267): pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.758000 audit[7594]: CRED_ACQ pid=7594 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:15.793766 kernel: audit: type=1103 audit(1769565495.758:1268): pid=7594 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:16.497754 sshd[7594]: Connection closed by 10.0.0.1 port 33002 Jan 28 01:58:16.500008 sshd-session[7590]: pam_unix(sshd:session): session closed for user core Jan 28 01:58:16.503000 audit[7590]: USER_END pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 28 01:58:16.572601 containerd[1609]: time="2026-01-28T01:58:16.547260084Z" level=info msg="container event discarded" container=497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3 type=CONTAINER_CREATED_EVENT Jan 28 01:58:16.572601 containerd[1609]: time="2026-01-28T01:58:16.547321529Z" level=info msg="container event discarded" container=497187648e1a4623a83de8f8cb8da263e2a60280d4ac34016d3a41c75e0337c3 type=CONTAINER_STARTED_EVENT Jan 28 01:58:16.512504 systemd[1]: sshd@59-10.0.0.85:22-10.0.0.1:33002.service: Deactivated successfully. Jan 28 01:58:16.519246 systemd[1]: session-61.scope: Deactivated successfully. Jan 28 01:58:16.523168 systemd-logind[1586]: Session 61 logged out. Waiting for processes to exit. Jan 28 01:58:16.531019 systemd-logind[1586]: Removed session 61. Jan 28 01:58:16.503000 audit[7590]: CRED_DISP pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:16.592231 kernel: audit: type=1106 audit(1769565496.503:1269): pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:16.592340 kernel: audit: type=1104 audit(1769565496.503:1270): pid=7590 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:16.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.85:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 28 01:58:18.215187 kubelet[2967]: E0128 01:58:18.213554 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:18.335497 containerd[1609]: time="2026-01-28T01:58:18.330142962Z" level=info msg="container event discarded" container=781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9 type=CONTAINER_CREATED_EVENT Jan 28 01:58:18.335497 containerd[1609]: time="2026-01-28T01:58:18.330239651Z" level=info msg="container event discarded" container=781d351f3b53b12c56e2c941fe38b7f973672276d0a816bf192907bac63936e9 type=CONTAINER_STARTED_EVENT Jan 28 01:58:18.998202 containerd[1609]: time="2026-01-28T01:58:18.995358975Z" level=info msg="container event discarded" container=e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b type=CONTAINER_CREATED_EVENT Jan 28 01:58:18.998202 containerd[1609]: time="2026-01-28T01:58:18.995499047Z" level=info msg="container event discarded" container=e678ed04901bb1e0782c158bf82ec6681f27ddb64ac555a894e260028106831b type=CONTAINER_STARTED_EVENT Jan 28 01:58:19.935565 containerd[1609]: time="2026-01-28T01:58:19.935374405Z" level=info msg="container event discarded" container=e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9 type=CONTAINER_CREATED_EVENT Jan 28 01:58:19.936516 containerd[1609]: time="2026-01-28T01:58:19.936363037Z" level=info msg="container event discarded" container=e07a46d6a49c43c2927ed954d5dd72f23cc4e56b36163320fec238f45b8537f9 type=CONTAINER_STARTED_EVENT Jan 28 01:58:20.196812 containerd[1609]: time="2026-01-28T01:58:20.196477139Z" level=info msg="container event discarded" container=c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2 type=CONTAINER_CREATED_EVENT Jan 28 01:58:20.925246 containerd[1609]: time="2026-01-28T01:58:20.923844420Z" level=info msg="container event discarded" 
container=c2fa77a97b7ddbd6535c1e7ab7a25c5b812d4273c9f2335565e07e213161eca2 type=CONTAINER_STARTED_EVENT Jan 28 01:58:21.546319 systemd[1]: Started sshd@60-10.0.0.85:22-10.0.0.1:33016.service - OpenSSH per-connection server daemon (10.0.0.1:33016). Jan 28 01:58:21.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.85:22-10.0.0.1:33016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:21.578419 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:58:21.578507 kernel: audit: type=1130 audit(1769565501.544:1272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.85:22-10.0.0.1:33016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:21.834000 audit[7608]: USER_ACCT pid=7608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:21.856069 sshd[7608]: Accepted publickey for core from 10.0.0.1 port 33016 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 01:58:21.868802 sshd-session[7608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:58:21.902319 kernel: audit: type=1101 audit(1769565501.834:1273): pid=7608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:21.902465 kernel: audit: type=1103 audit(1769565501.846:1274): pid=7608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:21.846000 audit[7608]: CRED_ACQ pid=7608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:21.961918 kernel: audit: type=1006 audit(1769565501.846:1275): pid=7608 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Jan 28 01:58:21.967012 systemd-logind[1586]: New session 62 of user core. Jan 28 01:58:21.846000 audit[7608]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd8c1f520 a2=3 a3=0 items=0 ppid=1 pid=7608 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:22.018763 kernel: audit: type=1300 audit(1769565501.846:1275): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd8c1f520 a2=3 a3=0 items=0 ppid=1 pid=7608 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:58:22.018931 kernel: audit: type=1327 audit(1769565501.846:1275): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:21.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:58:22.032564 systemd[1]: Started session-62.scope - Session 62 of User core. 
Jan 28 01:58:22.051000 audit[7608]: USER_START pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.065000 audit[7612]: CRED_ACQ pid=7612 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.141930 kernel: audit: type=1105 audit(1769565502.051:1276): pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.142141 kernel: audit: type=1103 audit(1769565502.065:1277): pid=7612 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.216408 kubelet[2967]: E0128 01:58:22.210734 2967 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mbn64" podUID="ae5a1f75-fd39-4d6a-a16f-43b6b8db37e9" Jan 28 01:58:22.216408 kubelet[2967]: E0128 01:58:22.213613 2967 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654b4ddbfd-mgclm" podUID="3ef171ed-8146-4d6a-9063-eb31677aa1d4" Jan 28 01:58:22.533530 sshd[7612]: Connection closed by 10.0.0.1 port 33016 Jan 28 01:58:22.535528 sshd-session[7608]: pam_unix(sshd:session): session closed for user core Jan 28 01:58:22.538000 audit[7608]: USER_END pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.548376 systemd[1]: sshd@60-10.0.0.85:22-10.0.0.1:33016.service: Deactivated successfully. Jan 28 01:58:22.567651 systemd[1]: session-62.scope: Deactivated successfully. Jan 28 01:58:22.595365 kernel: audit: type=1106 audit(1769565502.538:1278): pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.538000 audit[7608]: CRED_DISP pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.622875 systemd-logind[1586]: Session 62 logged out. 
Waiting for processes to exit. Jan 28 01:58:22.634215 kernel: audit: type=1104 audit(1769565502.538:1279): pid=7608 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 01:58:22.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.85:22-10.0.0.1:33016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:58:22.634479 systemd-logind[1586]: Removed session 62. Jan 28 01:58:23.188757 kubelet[2967]: E0128 01:58:23.187577 2967 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"