Jan 16 21:16:27.929455 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 18:44:02 -00 2026
Jan 16 21:16:27.929487 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:16:27.929503 kernel: BIOS-provided physical RAM map:
Jan 16 21:16:27.929513 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 21:16:27.929523 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 21:16:27.929532 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 21:16:27.929544 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 16 21:16:27.929554 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 16 21:16:27.929565 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 16 21:16:27.929576 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 16 21:16:27.929589 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 21:16:27.929599 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 21:16:27.929609 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 16 21:16:27.929620 kernel: NX (Execute Disable) protection: active
Jan 16 21:16:27.929633 kernel: APIC: Static calls initialized
Jan 16 21:16:27.929647 kernel: SMBIOS 2.8 present.
Jan 16 21:16:27.929658 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 16 21:16:27.929668 kernel: DMI: Memory slots populated: 1/1
Jan 16 21:16:27.929679 kernel: Hypervisor detected: KVM
Jan 16 21:16:27.929689 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 16 21:16:27.929699 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 21:16:27.929710 kernel: kvm-clock: using sched offset of 12704247761 cycles
Jan 16 21:16:27.929721 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 21:16:27.929733 kernel: tsc: Detected 2445.426 MHz processor
Jan 16 21:16:27.929747 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 21:16:27.929760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 21:16:27.929771 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 16 21:16:27.929783 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 21:16:27.929795 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 21:16:27.929807 kernel: Using GB pages for direct mapping
Jan 16 21:16:27.929820 kernel: ACPI: Early table checksum verification disabled
Jan 16 21:16:27.929834 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 16 21:16:27.929844 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929854 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929863 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929875 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 16 21:16:27.929886 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929896 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929909 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929922 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:16:27.929941 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 16 21:16:27.929951 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 16 21:16:27.929963 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 16 21:16:27.929975 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 16 21:16:27.929988 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 16 21:16:27.929998 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 16 21:16:27.930011 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 16 21:16:27.930021 kernel: No NUMA configuration found
Jan 16 21:16:27.930031 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 16 21:16:27.930042 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 16 21:16:27.930058 kernel: Zone ranges:
Jan 16 21:16:27.930068 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 21:16:27.930078 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 16 21:16:27.930090 kernel: Normal empty
Jan 16 21:16:27.930101 kernel: Device empty
Jan 16 21:16:27.930111 kernel: Movable zone start for each node
Jan 16 21:16:27.930121 kernel: Early memory node ranges
Jan 16 21:16:27.930137 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 21:16:27.930147 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 16 21:16:27.930157 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 16 21:16:27.930168 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 21:16:27.930180 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 21:16:27.930190 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 16 21:16:27.930200 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 21:16:27.930213 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 21:16:27.930226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 21:16:27.930236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 21:16:27.930249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 21:16:27.930259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 21:16:27.930269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 21:16:27.930433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 21:16:27.930445 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 21:16:27.930461 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 21:16:27.930471 kernel: TSC deadline timer available
Jan 16 21:16:27.930483 kernel: CPU topo: Max. logical packages: 1
Jan 16 21:16:27.930494 kernel: CPU topo: Max. logical dies: 1
Jan 16 21:16:27.930504 kernel: CPU topo: Max. dies per package: 1
Jan 16 21:16:27.930513 kernel: CPU topo: Max. threads per core: 1
Jan 16 21:16:27.930527 kernel: CPU topo: Num. cores per package: 4
Jan 16 21:16:27.930536 kernel: CPU topo: Num. threads per package: 4
Jan 16 21:16:27.930550 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 16 21:16:27.930561 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 21:16:27.930572 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 16 21:16:27.930582 kernel: kvm-guest: setup PV sched yield
Jan 16 21:16:27.930592 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 16 21:16:27.930606 kernel: Booting paravirtualized kernel on KVM
Jan 16 21:16:27.930616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 21:16:27.930630 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 16 21:16:27.930642 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 16 21:16:27.930653 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 16 21:16:27.930663 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 16 21:16:27.930672 kernel: kvm-guest: PV spinlocks enabled
Jan 16 21:16:27.930686 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 16 21:16:27.930697 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:16:27.930710 kernel: random: crng init done
Jan 16 21:16:27.930724 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 21:16:27.930734 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 21:16:27.930744 kernel: Fallback order for Node 0: 0
Jan 16 21:16:27.930754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 16 21:16:27.930766 kernel: Policy zone: DMA32
Jan 16 21:16:27.930776 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 21:16:27.930790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 16 21:16:27.930803 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 16 21:16:27.930813 kernel: ftrace: allocated 157 pages with 5 groups
Jan 16 21:16:27.930823 kernel: Dynamic Preempt: voluntary
Jan 16 21:16:27.930833 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 21:16:27.930847 kernel: rcu: RCU event tracing is enabled.
Jan 16 21:16:27.930858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 16 21:16:27.930871 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 21:16:27.930885 kernel: Rude variant of Tasks RCU enabled.
Jan 16 21:16:27.930895 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 21:16:27.930905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 21:16:27.930916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 16 21:16:27.930928 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:16:27.930938 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:16:27.930948 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:16:27.930964 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 16 21:16:27.930975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 21:16:27.930994 kernel: Console: colour VGA+ 80x25
Jan 16 21:16:27.931009 kernel: printk: legacy console [ttyS0] enabled
Jan 16 21:16:27.931019 kernel: ACPI: Core revision 20240827
Jan 16 21:16:27.931030 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 21:16:27.931043 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 21:16:27.931053 kernel: x2apic enabled
Jan 16 21:16:27.931064 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 21:16:27.931081 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 16 21:16:27.931092 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 16 21:16:27.931102 kernel: kvm-guest: setup PV IPIs
Jan 16 21:16:27.931114 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 21:16:27.931130 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 16 21:16:27.931140 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 16 21:16:27.931152 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 16 21:16:27.931164 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 16 21:16:27.931175 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 16 21:16:27.931186 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 21:16:27.931199 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 21:16:27.931213 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 16 21:16:27.931224 kernel: Speculative Store Bypass: Vulnerable
Jan 16 21:16:27.931237 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 16 21:16:27.931249 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 16 21:16:27.931259 kernel: active return thunk: srso_alias_return_thunk
Jan 16 21:16:27.931273 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 16 21:16:27.931449 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 16 21:16:27.931464 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 21:16:27.931476 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 21:16:27.931487 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 21:16:27.931497 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 21:16:27.931508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 21:16:27.931522 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 16 21:16:27.931536 kernel: Freeing SMP alternatives memory: 32K
Jan 16 21:16:27.931546 kernel: pid_max: default: 32768 minimum: 301
Jan 16 21:16:27.931560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 16 21:16:27.931570 kernel: landlock: Up and running.
Jan 16 21:16:27.931581 kernel: SELinux: Initializing.
Jan 16 21:16:27.931592 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 21:16:27.931605 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 21:16:27.931619 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 16 21:16:27.931630 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 16 21:16:27.931643 kernel: signal: max sigframe size: 1776
Jan 16 21:16:27.931654 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 21:16:27.931665 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 21:16:27.931679 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 16 21:16:27.931690 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 21:16:27.931700 kernel: smp: Bringing up secondary CPUs ...
Jan 16 21:16:27.931716 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 21:16:27.931727 kernel: .... node #0, CPUs: #1 #2 #3
Jan 16 21:16:27.931737 kernel: smp: Brought up 1 node, 4 CPUs
Jan 16 21:16:27.931748 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 16 21:16:27.931761 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 16 21:16:27.931774 kernel: devtmpfs: initialized
Jan 16 21:16:27.931787 kernel: x86/mm: Memory block size: 128MB
Jan 16 21:16:27.931804 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 21:16:27.931817 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 16 21:16:27.931830 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 21:16:27.931842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 21:16:27.931856 kernel: audit: initializing netlink subsys (disabled)
Jan 16 21:16:27.931869 kernel: audit: type=2000 audit(1768598161.811:1): state=initialized audit_enabled=0 res=1
Jan 16 21:16:27.931881 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 21:16:27.931898 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 21:16:27.931910 kernel: cpuidle: using governor menu
Jan 16 21:16:27.931922 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 21:16:27.931934 kernel: dca service started, version 1.12.1
Jan 16 21:16:27.931947 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 16 21:16:27.931960 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 16 21:16:27.931973 kernel: PCI: Using configuration type 1 for base access
Jan 16 21:16:27.931989 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 21:16:27.932001 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 21:16:27.932015 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 21:16:27.932027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 21:16:27.932039 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 21:16:27.932051 kernel: ACPI: Added _OSI(Module Device)
Jan 16 21:16:27.932064 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 21:16:27.932080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 21:16:27.932092 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 21:16:27.932105 kernel: ACPI: Interpreter enabled
Jan 16 21:16:27.932117 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 16 21:16:27.932130 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 21:16:27.932143 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 21:16:27.932157 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 21:16:27.932174 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 16 21:16:27.932187 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 21:16:27.932717 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 21:16:27.932979 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 16 21:16:27.933235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 16 21:16:27.933254 kernel: PCI host bridge to bus 0000:00
Jan 16 21:16:27.933745 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 21:16:27.933966 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 21:16:27.934182 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 21:16:27.934640 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 16 21:16:27.934934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 16 21:16:27.935154 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 16 21:16:27.935556 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 21:16:27.935816 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 16 21:16:27.936252 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 16 21:16:27.936651 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 16 21:16:27.936870 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 16 21:16:27.937085 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 16 21:16:27.937482 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 21:16:27.937700 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 34179 usecs
Jan 16 21:16:27.937922 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 16 21:16:27.938138 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 16 21:16:27.938510 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 16 21:16:27.938729 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 16 21:16:27.938953 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 16 21:16:27.939164 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 16 21:16:27.939534 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 16 21:16:27.939751 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 16 21:16:27.939984 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 16 21:16:27.940196 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 16 21:16:27.940573 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 16 21:16:27.940817 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 16 21:16:27.941060 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 16 21:16:27.941611 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 16 21:16:27.941874 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 16 21:16:27.942120 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 32226 usecs
Jan 16 21:16:27.942663 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 16 21:16:27.942915 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 16 21:16:27.946240 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 16 21:16:27.946683 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 16 21:16:27.946935 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 16 21:16:27.946956 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 21:16:27.946969 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 21:16:27.946980 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 21:16:27.946990 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 21:16:27.947002 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 16 21:16:27.947022 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 16 21:16:27.947036 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 16 21:16:27.947047 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 16 21:16:27.947057 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 16 21:16:27.947068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 16 21:16:27.947080 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 16 21:16:27.947093 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 16 21:16:27.947111 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 16 21:16:27.947122 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 16 21:16:27.947133 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 16 21:16:27.947143 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 16 21:16:27.947157 kernel: iommu: Default domain type: Translated
Jan 16 21:16:27.947170 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 21:16:27.947183 kernel: PCI: Using ACPI for IRQ routing
Jan 16 21:16:27.947198 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 21:16:27.947208 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 21:16:27.947222 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 16 21:16:27.947678 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 16 21:16:27.947929 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 16 21:16:27.948175 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 21:16:27.948192 kernel: vgaarb: loaded
Jan 16 21:16:27.948209 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 21:16:27.948222 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 21:16:27.948235 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 21:16:27.948249 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 21:16:27.948261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 21:16:27.948271 kernel: pnp: PnP ACPI init
Jan 16 21:16:27.948706 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 16 21:16:27.948733 kernel: pnp: PnP ACPI: found 6 devices
Jan 16 21:16:27.948745 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 21:16:27.948756 kernel: NET: Registered PF_INET protocol family
Jan 16 21:16:27.948767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 21:16:27.948781 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 21:16:27.948793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 21:16:27.948810 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 21:16:27.948821 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 21:16:27.948832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 21:16:27.948843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 21:16:27.948857 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 21:16:27.948869 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 21:16:27.948880 kernel: NET: Registered PF_XDP protocol family
Jan 16 21:16:27.949106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 21:16:27.949523 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 21:16:27.949752 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 21:16:27.949982 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 16 21:16:27.950206 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 16 21:16:27.951851 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 16 21:16:27.951871 kernel: PCI: CLS 0 bytes, default 64
Jan 16 21:16:27.951888 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 16 21:16:27.951902 kernel: Initialise system trusted keyrings
Jan 16 21:16:27.951915 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 21:16:27.951929 kernel: Key type asymmetric registered
Jan 16 21:16:27.951940 kernel: Asymmetric key parser 'x509' registered
Jan 16 21:16:27.951951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 21:16:27.951962 kernel: io scheduler mq-deadline registered
Jan 16 21:16:27.951977 kernel: io scheduler kyber registered
Jan 16 21:16:27.951991 kernel: io scheduler bfq registered
Jan 16 21:16:27.952003 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 21:16:27.952016 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 16 21:16:27.952027 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 16 21:16:27.952038 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 16 21:16:27.952049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 21:16:27.952062 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 21:16:27.952080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 21:16:27.952091 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 21:16:27.952102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 21:16:27.952522 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 16 21:16:27.952545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 16 21:16:27.952783 kernel: rtc_cmos 00:04: registered as rtc0
Jan 16 21:16:27.952805 kernel: hpet: Lost 3 RTC interrupts
Jan 16 21:16:27.953036 kernel: rtc_cmos 00:04: setting system clock to 2026-01-16T21:16:18 UTC (1768598178)
Jan 16 21:16:27.953273 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 16 21:16:27.953464 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 16 21:16:27.953477 kernel: NET: Registered PF_INET6 protocol family
Jan 16 21:16:27.953491 kernel: Segment Routing with IPv6
Jan 16 21:16:27.953502 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 21:16:27.953518 kernel: NET: Registered PF_PACKET protocol family
Jan 16 21:16:27.953529 kernel: Key type dns_resolver registered
Jan 16 21:16:27.953542 kernel: IPI shorthand broadcast: enabled
Jan 16 21:16:27.953555 kernel: sched_clock: Marking stable (13265348042, 799715871)->(17749034830, -3683970917)
Jan 16 21:16:27.953568 kernel: registered taskstats version 1
Jan 16 21:16:27.953580 kernel: Loading compiled-in X.509 certificates
Jan 16 21:16:27.953590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a9591db9912320a48a0589d0293fff3e535b90df'
Jan 16 21:16:27.953605 kernel: Demotion targets for Node 0: null
Jan 16 21:16:27.953618 kernel: Key type .fscrypt registered
Jan 16 21:16:27.953631 kernel: Key type fscrypt-provisioning registered
Jan 16 21:16:27.953644 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 21:16:27.953654 kernel: ima: Allocated hash algorithm: sha1
Jan 16 21:16:27.953665 kernel: ima: No architecture policies found
Jan 16 21:16:27.953676 kernel: clk: Disabling unused clocks
Jan 16 21:16:27.953694 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 16 21:16:27.953708 kernel: Write protecting the kernel read-only data: 47104k
Jan 16 21:16:27.953719 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 16 21:16:27.953729 kernel: Run /init as init process
Jan 16 21:16:27.953740 kernel: with arguments:
Jan 16 21:16:27.953753 kernel: /init
Jan 16 21:16:27.953765 kernel: with environment:
Jan 16 21:16:27.953782 kernel: HOME=/
Jan 16 21:16:27.953793 kernel: TERM=linux
Jan 16 21:16:27.953803 kernel: SCSI subsystem initialized
Jan 16 21:16:27.953814 kernel: libata version 3.00 loaded.
Jan 16 21:16:27.954063 kernel: ahci 0000:00:1f.2: version 3.0
Jan 16 21:16:27.954084 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 16 21:16:27.958521 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 16 21:16:27.958782 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 16 21:16:27.959030 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 16 21:16:27.959483 kernel: scsi host0: ahci
Jan 16 21:16:27.959777 kernel: scsi host1: ahci
Jan 16 21:16:27.960044 kernel: scsi host2: ahci
Jan 16 21:16:27.960529 kernel: scsi host3: ahci
Jan 16 21:16:27.960848 kernel: scsi host4: ahci
Jan 16 21:16:27.961179 kernel: scsi host5: ahci
Jan 16 21:16:27.961202 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 16 21:16:27.961215 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 16 21:16:27.961226 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 16 21:16:27.961245 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 16 21:16:27.961259 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 16 21:16:27.961274 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 16 21:16:27.962490 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 16 21:16:27.962507 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 16 21:16:27.962521 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 16 21:16:27.962535 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 16 21:16:27.962552 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 16 21:16:27.962563 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 16 21:16:27.962574 kernel: ata3.00: LPM support broken, forcing max_power
Jan 16 21:16:27.962588 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 16 21:16:27.962602 kernel: ata3.00: applying bridge limits
Jan 16 21:16:27.962613 kernel: ata3.00: LPM support broken, forcing max_power
Jan 16 21:16:27.962624 kernel: ata3.00: configured for UDMA/100
Jan 16 21:16:27.962928 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 16 21:16:27.963203 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 16 21:16:27.963650 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 16 21:16:27.963673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 16 21:16:27.963917 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 16 21:16:27.963937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 21:16:27.964215 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 16 21:16:27.964235 kernel: GPT:16515071 != 27000831
Jan 16 21:16:27.964250 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 21:16:27.964261 kernel: GPT:16515071 != 27000831
Jan 16 21:16:27.964272 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 21:16:27.964452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 21:16:27.964467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 21:16:27.964483 kernel: device-mapper: uevent: version 1.0.3
Jan 16 21:16:27.964495 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 16 21:16:27.964507 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 16 21:16:27.964521 kernel: raid6: avx2x4 gen() 14437 MB/s
Jan 16 21:16:27.964533 kernel: raid6: avx2x2 gen() 14676 MB/s
Jan 16 21:16:27.964544 kernel: raid6: avx2x1 gen() 8664 MB/s
Jan 16 21:16:27.964554 kernel: raid6: using algorithm avx2x2 gen() 14676 MB/s
Jan 16 21:16:27.964572 kernel: raid6: .... xor() 13834 MB/s, rmw enabled
Jan 16 21:16:27.964586 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 21:16:27.964602 kernel: xor: automatically using best checksumming function avx
Jan 16 21:16:27.964617 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 21:16:27.964629 kernel: BTRFS: device fsid a5f82c06-1ff1-43b3-a650-214802f1359b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 16 21:16:27.964643 kernel: BTRFS info (device dm-0): first mount of filesystem a5f82c06-1ff1-43b3-a650-214802f1359b
Jan 16 21:16:27.964657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:16:27.964670 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 21:16:27.964685 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 16 21:16:27.964700 kernel: loop: module loaded
Jan 16 21:16:27.964711 kernel: loop0: detected capacity change from 0 to 100536
Jan 16 21:16:27.964722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 21:16:27.964742 systemd[1]: Successfully made /usr/ read-only.
Jan 16 21:16:27.964759 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 16 21:16:27.964773 systemd[1]: Detected virtualization kvm.
Jan 16 21:16:27.964784 systemd[1]: Detected architecture x86-64.
Jan 16 21:16:27.964796 systemd[1]: Running in initrd.
Jan 16 21:16:27.964813 systemd[1]: No hostname configured, using default hostname.
Jan 16 21:16:27.964827 systemd[1]: Hostname set to .
Jan 16 21:16:27.964841 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 16 21:16:27.964854 systemd[1]: Queued start job for default target initrd.target.
Jan 16 21:16:27.964866 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:16:27.964877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:16:27.964891 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:16:27.964910 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 21:16:27.964925 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 21:16:27.964938 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 21:16:27.964950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 21:16:27.964962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:16:27.964980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:16:27.964994 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:16:27.965007 systemd[1]: Reached target paths.target - Path Units.
Jan 16 21:16:27.965019 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 21:16:27.965030 systemd[1]: Reached target swap.target - Swaps.
Jan 16 21:16:27.965043 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 21:16:27.965058 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:16:27.965077 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:16:27.965089 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:16:27.965100 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 21:16:27.965112 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 16 21:16:27.965126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:16:27.965140 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:16:27.965155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:16:27.965171 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 21:16:27.965183 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 21:16:27.965195 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 21:16:27.965210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 21:16:27.965224 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 21:16:27.965239 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 16 21:16:27.965255 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 21:16:27.965266 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 21:16:27.965443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 21:16:27.965460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:16:27.965481 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 21:16:27.965496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:16:27.965541 systemd-journald[320]: Collecting audit messages is enabled.
Jan 16 21:16:27.965577 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 21:16:27.965594 kernel: audit: type=1130 audit(1768598187.935:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:27.965607 kernel: audit: type=1130 audit(1768598187.960:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:27.965619 systemd-journald[320]: Journal started
Jan 16 21:16:27.965643 systemd-journald[320]: Runtime Journal (/run/log/journal/a069cb602e4d41358a3ed9dcf66822ee) is 6M, max 48.2M, 42.1M free.
Jan 16 21:16:27.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:27.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.005144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 21:16:28.005183 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 21:16:28.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.030478 kernel: audit: type=1130 audit(1768598188.012:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.048582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 21:16:28.051714 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 21:16:28.545883 kernel: Bridge firewalling registered
Jan 16 21:16:28.054711 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 16 21:16:28.557091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:16:28.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.579557 kernel: audit: type=1130 audit(1768598188.556:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.604933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:16:28.644132 kernel: audit: type=1130 audit(1768598188.613:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.646240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 21:16:28.702633 kernel: audit: type=1130 audit(1768598188.661:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.692668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 21:16:28.704178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:16:28.736804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:16:28.775140 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 16 21:16:28.798645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:16:28.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.846698 kernel: audit: type=1130 audit(1768598188.821:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.847917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:16:28.897973 kernel: audit: type=1130 audit(1768598188.847:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.848875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 21:16:28.935968 kernel: audit: type=1130 audit(1768598188.897:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.936192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:16:28.974151 kernel: audit: type=1130 audit(1768598188.936:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:28.980669 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 21:16:28.991000 audit: BPF prog-id=6 op=LOAD
Jan 16 21:16:28.999547 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:16:29.070263 dracut-cmdline[356]: dracut-109
Jan 16 21:16:29.086159 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:16:29.183897 systemd-resolved[357]: Positive Trust Anchors:
Jan 16 21:16:29.183988 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:16:29.183994 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:16:29.184037 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:16:29.224978 systemd-resolved[357]: Defaulting to hostname 'linux'.
Jan 16 21:16:29.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:29.228899 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:16:29.282900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:16:29.473687 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 21:16:29.510627 kernel: iscsi: registered transport (tcp)
Jan 16 21:16:29.555469 kernel: iscsi: registered transport (qla4xxx)
Jan 16 21:16:29.555530 kernel: QLogic iSCSI HBA Driver
Jan 16 21:16:29.635271 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 21:16:29.705762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:16:29.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:29.710740 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 21:16:29.890861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 21:16:29.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:29.908677 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 21:16:29.910584 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 21:16:30.022572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 21:16:30.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.024000 audit: BPF prog-id=7 op=LOAD
Jan 16 21:16:30.024000 audit: BPF prog-id=8 op=LOAD
Jan 16 21:16:30.026001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:16:30.130223 systemd-udevd[580]: Using default interface naming scheme 'v257'.
Jan 16 21:16:30.158997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:16:30.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.173160 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 21:16:30.263722 dracut-pre-trigger[620]: rd.md=0: removing MD RAID activation
Jan 16 21:16:30.370974 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:16:30.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.383733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 21:16:30.413799 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 21:16:30.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.438000 audit: BPF prog-id=9 op=LOAD
Jan 16 21:16:30.441721 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 21:16:30.559694 systemd-networkd[723]: lo: Link UP
Jan 16 21:16:30.559756 systemd-networkd[723]: lo: Gained carrier
Jan 16 21:16:30.571824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 21:16:30.580676 systemd[1]: Reached target network.target - Network.
Jan 16 21:16:30.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.605589 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:16:30.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.635611 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 21:16:30.722134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 21:16:30.763685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 21:16:30.826995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 21:16:30.863632 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 21:16:30.880713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 21:16:30.922716 kernel: AES CTR mode by8 optimization enabled
Jan 16 21:16:30.908223 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:16:30.908231 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:16:30.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:30.909087 systemd-networkd[723]: eth0: Link UP
Jan 16 21:16:30.917202 systemd-networkd[723]: eth0: Gained carrier
Jan 16 21:16:30.917217 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:16:30.941716 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 21:16:30.942766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:16:30.943082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:16:30.955670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:16:30.970642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:16:31.049551 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 16 21:16:31.103801 disk-uuid[823]: Primary Header is updated.
Jan 16 21:16:31.103801 disk-uuid[823]: Secondary Entries is updated.
Jan 16 21:16:31.103801 disk-uuid[823]: Secondary Header is updated.
Jan 16 21:16:31.134527 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 16 21:16:31.124014 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:16:31.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:31.143652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:16:31.148205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:16:31.148275 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 21:16:31.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:31.188669 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 21:16:31.790681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:16:31.861564 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:16:31.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:32.261528 disk-uuid[832]: Warning: The kernel is still using the old partition table.
Jan 16 21:16:32.261528 disk-uuid[832]: The new table will be used at the next reboot or after you
Jan 16 21:16:32.261528 disk-uuid[832]: run partprobe(8) or kpartx(8)
Jan 16 21:16:32.261528 disk-uuid[832]: The operation has completed successfully.
Jan 16 21:16:32.303266 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 21:16:32.303824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 21:16:32.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:32.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:32.314761 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 21:16:32.437721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857)
Jan 16 21:16:32.454015 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:16:32.454089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:16:32.490214 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:16:32.490543 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:16:32.519041 systemd-networkd[723]: eth0: Gained IPv6LL
Jan 16 21:16:32.535855 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:16:32.549180 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 21:16:32.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:32.570252 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 21:16:32.837988 ignition[876]: Ignition 2.24.0
Jan 16 21:16:32.838066 ignition[876]: Stage: fetch-offline
Jan 16 21:16:32.838130 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:32.838662 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:32.866637 ignition[876]: parsed url from cmdline: ""
Jan 16 21:16:32.866704 ignition[876]: no config URL provided
Jan 16 21:16:32.867104 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 21:16:32.867131 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Jan 16 21:16:32.867197 ignition[876]: op(1): [started] loading QEMU firmware config module
Jan 16 21:16:32.867205 ignition[876]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 16 21:16:32.894187 ignition[876]: op(1): [finished] loading QEMU firmware config module
Jan 16 21:16:33.764146 ignition[876]: parsing config with SHA512: 8a1ba87590be5cdd7da4dae571cc06ece9c4be3bb27786f1c19e4b558eafcb584cf9887a6ed5c2da5749c06b4e2eb817ff54c6d305d444ed7d28bdfd3e8df929
Jan 16 21:16:33.799599 unknown[876]: fetched base config from "system"
Jan 16 21:16:33.799615 unknown[876]: fetched user config from "qemu"
Jan 16 21:16:33.802527 ignition[876]: fetch-offline: fetch-offline passed
Jan 16 21:16:33.879806 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jan 16 21:16:33.879843 kernel: audit: type=1130 audit(1768598193.821:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:33.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:33.820964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:16:33.802717 ignition[876]: Ignition finished successfully
Jan 16 21:16:33.824199 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 16 21:16:33.826662 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 21:16:33.968676 ignition[886]: Ignition 2.24.0
Jan 16 21:16:33.968765 ignition[886]: Stage: kargs
Jan 16 21:16:33.972795 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:33.972809 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:33.994828 ignition[886]: kargs: kargs passed
Jan 16 21:16:33.994927 ignition[886]: Ignition finished successfully
Jan 16 21:16:34.020211 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 21:16:34.059659 kernel: audit: type=1130 audit(1768598194.020:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.022899 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 21:16:34.194939 ignition[893]: Ignition 2.24.0
Jan 16 21:16:34.195023 ignition[893]: Stage: disks
Jan 16 21:16:34.195254 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:34.195274 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:34.196716 ignition[893]: disks: disks passed
Jan 16 21:16:34.196771 ignition[893]: Ignition finished successfully
Jan 16 21:16:34.240976 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 21:16:34.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.262078 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 21:16:34.304638 kernel: audit: type=1130 audit(1768598194.261:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.290642 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 21:16:34.309599 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 21:16:34.331247 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 21:16:34.355917 systemd[1]: Reached target basic.target - Basic System.
Jan 16 21:16:34.358803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 21:16:34.471507 systemd-fsck[902]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 16 21:16:34.488067 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 21:16:34.535815 kernel: audit: type=1130 audit(1768598194.488:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:34.492900 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 21:16:35.150690 kernel: EXT4-fs (vda9): mounted filesystem ec5ae8d3-548b-4a34-bd68-b1a953fcffb6 r/w with ordered data mode. Quota mode: none.
Jan 16 21:16:35.155019 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 21:16:35.166696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 21:16:35.195769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:16:35.199711 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 21:16:35.212864 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 16 21:16:35.212934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 21:16:35.212978 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:16:35.292037 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 21:16:35.346632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910)
Jan 16 21:16:35.346658 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:16:35.346669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:16:35.306224 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 21:16:35.388604 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:16:35.388671 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:16:35.391606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:16:35.852673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 21:16:35.892269 kernel: audit: type=1130 audit(1768598195.852:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:35.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:35.854729 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 21:16:35.918726 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 21:16:35.956530 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 21:16:35.979690 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:16:36.009180 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 21:16:36.040242 kernel: audit: type=1130 audit(1768598196.009:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:36.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:36.048810 ignition[1008]: INFO : Ignition 2.24.0
Jan 16 21:16:36.048810 ignition[1008]: INFO : Stage: mount
Jan 16 21:16:36.060217 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:36.060217 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:36.084039 ignition[1008]: INFO : mount: mount passed
Jan 16 21:16:36.084039 ignition[1008]: INFO : Ignition finished successfully
Jan 16 21:16:36.102604 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 21:16:36.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:36.120859 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 21:16:36.159161 kernel: audit: type=1130 audit(1768598196.117:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:36.182133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:16:36.241852 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Jan 16 21:16:36.256882 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:16:36.257041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:16:36.289608 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:16:36.289680 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:16:36.295732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:16:36.383910 ignition[1036]: INFO : Ignition 2.24.0
Jan 16 21:16:36.383910 ignition[1036]: INFO : Stage: files
Jan 16 21:16:36.383910 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:36.383910 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:36.428994 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 21:16:36.428994 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 16 21:16:36.428994 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 16 21:16:36.408013 unknown[1036]: wrote ssh authorized keys file for user: core
Jan 16 21:16:36.554861 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 21:16:36.659234 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 16 21:16:36.659234 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 16 21:16:36.687539 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 16 21:16:37.031230 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 16 21:16:40.463737 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 16 21:16:40.463737 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 16 21:16:40.501021 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 16 21:16:40.528085 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 16 21:16:40.738503 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:16:40.754247 ignition[1036]: INFO : files: files passed
Jan 16 21:16:40.887787 kernel: audit: type=1130 audit(1768598200.859:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:40.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:40.887916 ignition[1036]: INFO : Ignition finished successfully
Jan 16 21:16:40.824626 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 21:16:40.864749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 21:16:40.903917 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 21:16:40.961629 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 21:16:40.972259 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 21:16:40.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.003017 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 16 21:16:41.070902 kernel: audit: type=1130 audit(1768598200.991:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.070939 kernel: audit: type=1131 audit(1768598200.991:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:40.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.027052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:16:41.090092 kernel: audit: type=1130 audit(1768598201.048:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.090140 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:16:41.090140 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:16:41.049748 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 21:16:41.151600 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:16:41.147679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 21:16:41.335813 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 21:16:41.345261 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 21:16:41.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.373023 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 21:16:41.438170 kernel: audit: type=1130 audit(1768598201.369:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.438233 kernel: audit: type=1131 audit(1768598201.369:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.413776 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 21:16:41.460227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 21:16:41.478217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 21:16:41.596064 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:16:41.657243 kernel: audit: type=1130 audit(1768598201.610:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.614585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 21:16:41.722919 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:16:41.723214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:16:41.739177 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:16:41.821935 kernel: audit: type=1131 audit(1768598201.780:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:41.767137 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 21:16:41.775209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 21:16:41.775833 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:16:41.830249 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 21:16:41.842795 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 21:16:41.868667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 21:16:41.883232 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:16:41.890025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 21:16:41.906911 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:16:41.934626 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 21:16:41.970522 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:16:42.002173 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 21:16:42.026752 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 21:16:42.034614 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 21:16:42.089456 kernel: audit: type=1131 audit(1768598202.052:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.034799 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 21:16:42.035054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:16:42.089949 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:16:42.099892 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:16:42.138684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 21:16:42.147728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:16:42.176218 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 21:16:42.233097 kernel: audit: type=1131 audit(1768598202.187:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.176689 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:16:42.233584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 21:16:42.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.233793 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:16:42.244834 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 21:16:42.269656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 21:16:42.276140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:16:42.291675 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 21:16:42.311160 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 21:16:42.327540 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 21:16:42.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.327697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:16:42.340991 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 21:16:42.341210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:16:42.352933 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 16 21:16:42.353060 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:16:42.394177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 21:16:42.394603 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:16:42.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.428991 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 21:16:42.429167 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 21:16:42.446456 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 21:16:42.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.486268 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 21:16:42.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.496837 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 21:16:42.497193 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:16:42.542263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 21:16:42.650826 ignition[1093]: INFO : Ignition 2.24.0
Jan 16 21:16:42.650826 ignition[1093]: INFO : Stage: umount
Jan 16 21:16:42.650826 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:16:42.650826 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:16:42.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.542902 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:16:42.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.734146 ignition[1093]: INFO : umount: umount passed
Jan 16 21:16:42.734146 ignition[1093]: INFO : Ignition finished successfully
Jan 16 21:16:42.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.566706 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 21:16:42.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.566969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:16:42.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.655917 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 21:16:42.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.656487 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 21:16:42.676551 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 21:16:42.677151 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 21:16:42.729760 systemd[1]: Stopped target network.target - Network.
Jan 16 21:16:42.747729 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 21:16:42.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.747910 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 21:16:42.755603 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 21:16:42.755697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 21:16:42.938000 audit: BPF prog-id=6 op=UNLOAD
Jan 16 21:16:42.784134 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 21:16:42.784247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 21:16:42.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.801917 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 21:16:42.802006 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 21:16:42.812087 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 21:16:42.833791 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 21:16:42.882622 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 21:16:43.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.882855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 21:16:43.051000 audit: BPF prog-id=9 op=UNLOAD
Jan 16 21:16:42.937157 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 21:16:42.948505 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 21:16:43.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:42.949009 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 21:16:43.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.028684 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 21:16:43.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.029653 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 21:16:43.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.049772 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 16 21:16:43.069840 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 21:16:43.070117 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:16:43.086535 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 21:16:43.086641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 21:16:43.138592 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 21:16:43.160557 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 21:16:43.160694 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 21:16:43.161001 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 21:16:43.161072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:16:43.194110 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 21:16:43.194212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:16:43.223201 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:16:43.395262 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 21:16:43.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.396150 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:16:43.414731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 21:16:43.414900 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:16:43.433579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 21:16:43.433713 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:16:43.510649 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 21:16:43.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.510850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 21:16:43.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.541117 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 21:16:43.541235 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 21:16:43.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.563142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 21:16:43.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.563245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 21:16:43.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.596966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 21:16:43.608906 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 16 21:16:43.609026 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:16:43.629993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 21:16:43.630106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:16:43.637669 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 21:16:43.637760 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 21:16:43.659152 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 21:16:43.659252 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:16:43.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.802113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:16:43.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.802555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:16:43.845005 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 21:16:43.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.845239 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 21:16:43.879954 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 21:16:43.883218 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 21:16:43.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:43.907785 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 21:16:43.941031 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 21:16:44.004115 systemd[1]: Switching root.
Jan 16 21:16:44.066960 systemd-journald[320]: Journal stopped
Jan 16 21:16:48.569795 systemd-journald[320]: Received SIGTERM from PID 1 (systemd).
Jan 16 21:16:48.570028 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 21:16:48.570061 kernel: SELinux: policy capability open_perms=1
Jan 16 21:16:48.570085 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 21:16:48.570110 kernel: SELinux: policy capability always_check_network=0
Jan 16 21:16:48.570128 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 21:16:48.570155 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 21:16:48.570173 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 21:16:48.570196 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 21:16:48.570213 kernel: SELinux: policy capability userspace_initial_context=0
Jan 16 21:16:48.570238 systemd[1]: Successfully loaded SELinux policy in 603.754ms.
Jan 16 21:16:48.570269 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.331ms.
Jan 16 21:16:48.570501 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 16 21:16:48.570523 systemd[1]: Detected virtualization kvm.
Jan 16 21:16:48.570548 systemd[1]: Detected architecture x86-64.
Jan 16 21:16:48.570565 systemd[1]: Detected first boot.
Jan 16 21:16:48.570584 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 16 21:16:48.570606 zram_generator::config[1137]: No configuration found.
Jan 16 21:16:48.570631 kernel: Guest personality initialized and is inactive
Jan 16 21:16:48.570650 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 16 21:16:48.570755 kernel: Initialized host personality
Jan 16 21:16:48.570772 kernel: NET: Registered PF_VSOCK protocol family
Jan 16 21:16:48.570789 systemd[1]: Populated /etc with preset unit settings.
Jan 16 21:16:48.570806 kernel: kauditd_printk_skb: 40 callbacks suppressed
Jan 16 21:16:48.570823 kernel: audit: type=1334 audit(1768598206.937:89): prog-id=12 op=LOAD
Jan 16 21:16:48.570835 kernel: audit: type=1334 audit(1768598206.937:90): prog-id=3 op=UNLOAD
Jan 16 21:16:48.570845 kernel: audit: type=1334 audit(1768598206.937:91): prog-id=13 op=LOAD
Jan 16 21:16:48.570859 kernel: audit: type=1334 audit(1768598206.937:92): prog-id=14 op=LOAD
Jan 16 21:16:48.570938 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 21:16:48.570957 kernel: audit: type=1334 audit(1768598206.938:93): prog-id=4 op=UNLOAD
Jan 16 21:16:48.570973 kernel: audit: type=1334 audit(1768598206.938:94): prog-id=5 op=UNLOAD
Jan 16 21:16:48.570992 kernel: audit: type=1131 audit(1768598206.942:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.571010 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 21:16:48.571029 kernel: audit: type=1130 audit(1768598207.020:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.571052 kernel: audit: type=1131 audit(1768598207.021:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.571071 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 21:16:48.571091 kernel: audit: type=1334 audit(1768598207.058:98): prog-id=12 op=UNLOAD
Jan 16 21:16:48.571115 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 21:16:48.571135 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 21:16:48.571162 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 21:16:48.571184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 21:16:48.571496 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 21:16:48.571517 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 21:16:48.571536 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 21:16:48.571556 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 21:16:48.571573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:16:48.571597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:16:48.571618 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 21:16:48.571636 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 21:16:48.571654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 21:16:48.571671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 21:16:48.571689 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 21:16:48.571709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:16:48.571733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:16:48.571750 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 21:16:48.571769 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 21:16:48.571788 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 21:16:48.571806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 21:16:48.571823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:16:48.571841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 21:16:48.571864 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 16 21:16:48.571882 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 21:16:48.571997 systemd[1]: Reached target swap.target - Swaps.
Jan 16 21:16:48.572018 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 21:16:48.572038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 21:16:48.572058 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 16 21:16:48.572077 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:16:48.572098 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 16 21:16:48.572122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 21:16:48.572142 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 16 21:16:48.572162 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 16 21:16:48.572181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 21:16:48.572200 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 21:16:48.572220 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 21:16:48.572239 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 21:16:48.572269 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 21:16:48.572505 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 21:16:48.572526 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:16:48.572546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 21:16:48.572565 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 21:16:48.572584 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 21:16:48.572604 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 21:16:48.572628 systemd[1]: Reached target machines.target - Containers.
Jan 16 21:16:48.572742 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 21:16:48.572764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 21:16:48.572783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 21:16:48.572801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 21:16:48.572818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 21:16:48.572840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 21:16:48.572859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 21:16:48.572878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 21:16:48.572897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 21:16:48.572915 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 21:16:48.572933 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 21:16:48.572952 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 21:16:48.572974 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 21:16:48.572993 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 21:16:48.573012 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 16 21:16:48.573032 kernel: fuse: init (API version 7.41)
Jan 16 21:16:48.573053 kernel: ACPI: bus type drm_connector registered
Jan 16 21:16:48.573072 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 21:16:48.573090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 21:16:48.573108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 21:16:48.573127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 21:16:48.573257 systemd-journald[1224]: Collecting audit messages is enabled.
Jan 16 21:16:48.573523 systemd-journald[1224]: Journal started
Jan 16 21:16:48.573558 systemd-journald[1224]: Runtime Journal (/run/log/journal/a069cb602e4d41358a3ed9dcf66822ee) is 6M, max 48.2M, 42.1M free.
Jan 16 21:16:47.718000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 16 21:16:48.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.432000 audit: BPF prog-id=14 op=UNLOAD
Jan 16 21:16:48.433000 audit: BPF prog-id=13 op=UNLOAD
Jan 16 21:16:48.458000 audit: BPF prog-id=15 op=LOAD
Jan 16 21:16:48.464000 audit: BPF prog-id=16 op=LOAD
Jan 16 21:16:48.465000 audit: BPF prog-id=17 op=LOAD
Jan 16 21:16:48.566000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 16 21:16:48.566000 audit[1224]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffccbe762e0 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:16:48.566000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 16 21:16:46.919078 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 21:16:46.939804 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 16 21:16:46.941169 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 21:16:46.942695 systemd[1]: systemd-journald.service: Consumed 2.425s CPU time.
Jan 16 21:16:48.594784 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 16 21:16:48.605764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 21:16:48.634795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:16:48.645570 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 21:16:48.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.660193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 21:16:48.668998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 21:16:48.677538 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 21:16:48.685541 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 21:16:48.693954 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 21:16:48.702695 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 21:16:48.711686 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 21:16:48.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.722989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:16:48.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.734843 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 21:16:48.735187 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 21:16:48.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.745652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 21:16:48.745920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 21:16:48.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.755119 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 21:16:48.755801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 21:16:48.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.767193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 21:16:48.767716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 21:16:48.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.780165 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 21:16:48.780701 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 21:16:48.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.793112 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 21:16:48.794000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 21:16:48.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.805130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 21:16:48.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.817241 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 21:16:48.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.831794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 21:16:48.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.845271 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 16 21:16:48.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.860016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:16:48.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:48.892750 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 21:16:48.903775 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 16 21:16:48.918854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 21:16:48.930039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 21:16:48.940845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 21:16:48.941043 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 21:16:48.951176 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 16 21:16:48.963669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 21:16:48.963889 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 16 21:16:48.966914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 21:16:48.981009 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 21:16:48.993003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 21:16:49.012747 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 21:16:49.023644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 21:16:49.025820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:16:49.039691 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 21:16:49.051978 systemd-journald[1224]: Time spent on flushing to /var/log/journal/a069cb602e4d41358a3ed9dcf66822ee is 80.723ms for 1118 entries.
Jan 16 21:16:49.051978 systemd-journald[1224]: System Journal (/var/log/journal/a069cb602e4d41358a3ed9dcf66822ee) is 8M, max 163.5M, 155.5M free.
Jan 16 21:16:49.191939 systemd-journald[1224]: Received client request to flush runtime journal.
Jan 16 21:16:49.191999 kernel: loop1: detected capacity change from 0 to 50784
Jan 16 21:16:49.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.054035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 21:16:49.078224 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 21:16:49.091237 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 21:16:49.112855 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 21:16:49.135203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 21:16:49.149767 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 16 21:16:49.176808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:16:49.180657 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 16 21:16:49.180675 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 16 21:16:49.190919 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 21:16:49.208717 kernel: loop2: detected capacity change from 0 to 111560
Jan 16 21:16:49.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.212040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 21:16:49.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.238882 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 21:16:49.266093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 16 21:16:49.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.300607 kernel: loop3: detected capacity change from 0 to 219144
Jan 16 21:16:49.358626 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 21:16:49.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.376000 audit: BPF prog-id=18 op=LOAD
Jan 16 21:16:49.376000 audit: BPF prog-id=19 op=LOAD
Jan 16 21:16:49.376000 audit: BPF prog-id=20 op=LOAD
Jan 16 21:16:49.378727 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 16 21:16:49.384512 kernel: loop4: detected capacity change from 0 to 50784
Jan 16 21:16:49.398000 audit: BPF prog-id=21 op=LOAD
Jan 16 21:16:49.401993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:16:49.421103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:16:49.438000 audit: BPF prog-id=22 op=LOAD
Jan 16 21:16:49.438000 audit: BPF prog-id=23 op=LOAD
Jan 16 21:16:49.443000 audit: BPF prog-id=24 op=LOAD
Jan 16 21:16:49.451518 kernel: loop5: detected capacity change from 0 to 111560
Jan 16 21:16:49.452719 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 16 21:16:49.478000 audit: BPF prog-id=25 op=LOAD
Jan 16 21:16:49.478000 audit: BPF prog-id=26 op=LOAD
Jan 16 21:16:49.478000 audit: BPF prog-id=27 op=LOAD
Jan 16 21:16:49.481776 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 21:16:49.498643 kernel: loop6: detected capacity change from 0 to 219144
Jan 16 21:16:49.536143 (sd-merge)[1282]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 16 21:16:49.544264 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 16 21:16:49.544900 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 16 21:16:49.555560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:16:49.558601 (sd-merge)[1282]: Merged extensions into '/usr'.
Jan 16 21:16:49.565775 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 16 21:16:49.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.570543 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 16 21:16:49.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:49.585940 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 21:16:49.586555 systemd[1]: Reloading...
Jan 16 21:16:49.759564 zram_generator::config[1331]: No configuration found.
Jan 16 21:16:49.826618 systemd-oomd[1283]: No swap; memory pressure usage will be degraded
Jan 16 21:16:49.840720 systemd-resolved[1284]: Positive Trust Anchors:
Jan 16 21:16:49.840743 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:16:49.840751 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:16:49.840800 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:16:49.850708 systemd-resolved[1284]: Defaulting to hostname 'linux'.
Jan 16 21:16:50.115640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 21:16:50.116008 systemd[1]: Reloading finished in 528 ms.
Jan 16 21:16:50.186924 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 21:16:50.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:50.199947 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 16 21:16:50.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:50.213176 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:16:50.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:50.234937 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 21:16:50.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:50.254705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 21:16:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:50.297792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:16:50.335259 systemd[1]: Starting ensure-sysext.service...
Jan 16 21:16:50.377836 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 21:16:50.390000 audit: BPF prog-id=8 op=UNLOAD
Jan 16 21:16:50.390000 audit: BPF prog-id=7 op=UNLOAD
Jan 16 21:16:50.393000 audit: BPF prog-id=28 op=LOAD
Jan 16 21:16:50.393000 audit: BPF prog-id=29 op=LOAD
Jan 16 21:16:50.395914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:16:50.413000 audit: BPF prog-id=30 op=LOAD
Jan 16 21:16:50.425000 audit: BPF prog-id=21 op=UNLOAD
Jan 16 21:16:50.429000 audit: BPF prog-id=31 op=LOAD
Jan 16 21:16:50.431000 audit: BPF prog-id=15 op=UNLOAD
Jan 16 21:16:50.431000 audit: BPF prog-id=32 op=LOAD
Jan 16 21:16:50.431000 audit: BPF prog-id=33 op=LOAD
Jan 16 21:16:50.431000 audit: BPF prog-id=16 op=UNLOAD
Jan 16 21:16:50.431000 audit: BPF prog-id=17 op=UNLOAD
Jan 16 21:16:50.433000 audit: BPF prog-id=34 op=LOAD
Jan 16 21:16:50.433000 audit: BPF prog-id=18 op=UNLOAD
Jan 16 21:16:50.434000 audit: BPF prog-id=35 op=LOAD
Jan 16 21:16:50.434000 audit: BPF prog-id=36 op=LOAD
Jan 16 21:16:50.434000 audit: BPF prog-id=19 op=UNLOAD
Jan 16 21:16:50.434000 audit: BPF prog-id=20 op=UNLOAD
Jan 16 21:16:50.436000 audit: BPF prog-id=37 op=LOAD
Jan 16 21:16:50.436000 audit: BPF prog-id=25 op=UNLOAD
Jan 16 21:16:50.437000 audit: BPF prog-id=38 op=LOAD
Jan 16 21:16:50.437000 audit: BPF prog-id=39 op=LOAD
Jan 16 21:16:50.437000 audit: BPF prog-id=26 op=UNLOAD
Jan 16 21:16:50.437000 audit: BPF prog-id=27 op=UNLOAD
Jan 16 21:16:50.438000 audit: BPF prog-id=40 op=LOAD
Jan 16 21:16:50.438000 audit: BPF prog-id=22 op=UNLOAD
Jan 16 21:16:50.438000 audit: BPF prog-id=41 op=LOAD
Jan 16 21:16:50.438000 audit: BPF prog-id=42 op=LOAD
Jan 16 21:16:50.440000 audit: BPF prog-id=23 op=UNLOAD
Jan 16 21:16:50.440000 audit: BPF prog-id=24 op=UNLOAD
Jan 16 21:16:50.464924 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
Jan 16 21:16:50.465020 systemd[1]: Reloading...
Jan 16 21:16:50.468917 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 16 21:16:50.468980 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 16 21:16:50.469881 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 21:16:50.474590 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 16 21:16:50.474815 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 16 21:16:50.501041 systemd-udevd[1370]: Using default interface naming scheme 'v257'.
Jan 16 21:16:50.507233 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 21:16:50.507252 systemd-tmpfiles[1369]: Skipping /boot
Jan 16 21:16:50.534729 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 21:16:50.534818 systemd-tmpfiles[1369]: Skipping /boot
Jan 16 21:16:50.632642 zram_generator::config[1402]: No configuration found.
Jan 16 21:16:50.805559 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 21:16:50.850518 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 16 21:16:50.865597 kernel: ACPI: button: Power Button [PWRF]
Jan 16 21:16:50.889779 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 16 21:16:50.904100 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 16 21:16:51.194986 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 21:16:51.199061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 21:16:51.213122 systemd[1]: Reloading finished in 747 ms.
Jan 16 21:16:51.237006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:16:51.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:51.292000 audit: BPF prog-id=43 op=LOAD
Jan 16 21:16:51.294000 audit: BPF prog-id=31 op=UNLOAD
Jan 16 21:16:51.294000 audit: BPF prog-id=44 op=LOAD
Jan 16 21:16:51.295000 audit: BPF prog-id=45 op=LOAD
Jan 16 21:16:51.295000 audit: BPF prog-id=32 op=UNLOAD
Jan 16 21:16:51.295000 audit: BPF prog-id=33 op=UNLOAD
Jan 16 21:16:51.324000 audit: BPF prog-id=46 op=LOAD
Jan 16 21:16:51.325000 audit: BPF prog-id=37 op=UNLOAD
Jan 16 21:16:51.330000 audit: BPF prog-id=47 op=LOAD
Jan 16 21:16:51.330000 audit: BPF prog-id=48 op=LOAD
Jan 16 21:16:51.330000 audit: BPF prog-id=38 op=UNLOAD
Jan 16 21:16:51.330000 audit: BPF prog-id=39 op=UNLOAD
Jan 16 21:16:51.337000 audit: BPF prog-id=49 op=LOAD
Jan 16 21:16:51.337000 audit: BPF prog-id=34 op=UNLOAD
Jan 16 21:16:51.342000 audit: BPF prog-id=50 op=LOAD
Jan 16 21:16:51.343000 audit: BPF prog-id=51 op=LOAD
Jan 16 21:16:51.343000 audit: BPF prog-id=35 op=UNLOAD
Jan 16 21:16:51.343000 audit: BPF prog-id=36 op=UNLOAD
Jan 16 21:16:51.355000 audit: BPF prog-id=52 op=LOAD
Jan 16 21:16:51.356000 audit: BPF prog-id=30 op=UNLOAD
Jan 16 21:16:51.374000 audit: BPF prog-id=53 op=LOAD
Jan 16 21:16:51.376000 audit: BPF prog-id=40 op=UNLOAD
Jan 16 21:16:51.377000 audit: BPF prog-id=54 op=LOAD
Jan 16 21:16:51.392000 audit: BPF prog-id=55 op=LOAD
Jan 16 21:16:51.392000 audit: BPF prog-id=41 op=UNLOAD
Jan 16 21:16:51.393000 audit: BPF prog-id=42 op=UNLOAD
Jan 16 21:16:51.395000 audit: BPF prog-id=56 op=LOAD
Jan 16 21:16:51.396000 audit: BPF prog-id=57 op=LOAD
Jan 16 21:16:51.396000 audit: BPF prog-id=28 op=UNLOAD
Jan 16 21:16:51.396000 audit: BPF prog-id=29 op=UNLOAD
Jan 16 21:16:51.420969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:16:51.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:51.628059 systemd[1]: Finished ensure-sysext.service.
Jan 16 21:16:51.653208 kernel: kvm_amd: TSC scaling supported
Jan 16 21:16:51.653536 kernel: kvm_amd: Nested Virtualization enabled
Jan 16 21:16:51.653590 kernel: kvm_amd: Nested Paging enabled
Jan 16 21:16:51.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:16:51.663987 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 16 21:16:51.664093 kernel: kvm_amd: PMU virtualization is disabled
Jan 16 21:16:51.703201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:16:51.708930 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 16 21:16:51.940234 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 21:16:51.958042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 21:16:51.962239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 21:16:51.986039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 21:16:52.002667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 21:16:52.036500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 21:16:52.062935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 21:16:52.064588 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 16 21:16:52.070670 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 21:16:52.091073 kernel: EDAC MC: Ver: 3.0.0
Jan 16 21:16:52.102129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 21:16:52.115260 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 16 21:16:52.121667 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 21:16:52.176089 kernel: kauditd_printk_skb: 117 callbacks suppressed
Jan 16 21:16:52.176228 kernel: audit: type=1305 audit(1768598212.139:214): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 16 21:16:52.139000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 16 21:16:52.176630 augenrules[1512]: No rules
Jan 16 21:16:52.139000 audit[1512]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb1ccd140 a2=420 a3=0 items=0 ppid=1483 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:16:52.184941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 21:16:52.217913 kernel: audit: type=1300 audit(1768598212.139:214): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb1ccd140 a2=420 a3=0 items=0 ppid=1483 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:16:52.139000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 16 21:16:52.253590 kernel: audit: type=1327 audit(1768598212.139:214): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 16 21:16:52.240825 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 16 21:16:52.268901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 21:16:52.271910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:16:52.275802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:16:52.278795 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 16 21:16:52.279596 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 16 21:16:52.287819 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 21:16:52.335896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 21:16:52.346883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 21:16:52.368943 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 21:16:52.372727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 21:16:52.387760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 21:16:52.388243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 21:16:52.409122 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 21:16:52.409772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 21:16:52.412016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 21:16:52.426620 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 21:16:52.448537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 21:16:52.448756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 21:16:52.448883 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 21:16:52.481586 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 21:16:52.626962 systemd-networkd[1517]: lo: Link UP
Jan 16 21:16:52.627043 systemd-networkd[1517]: lo: Gained carrier
Jan 16 21:16:52.633012 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 21:16:52.634645 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:16:52.634653 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:16:52.636896 systemd-networkd[1517]: eth0: Link UP
Jan 16 21:16:52.639576 systemd-networkd[1517]: eth0: Gained carrier
Jan 16 21:16:52.639609 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:16:52.663625 systemd-networkd[1517]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 16 21:16:52.668751 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection.
Jan 16 21:16:53.109832 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 16 21:16:53.109977 systemd-timesyncd[1518]: Initial clock synchronization to Fri 2026-01-16 21:16:53.109728 UTC.
Jan 16 21:16:53.112726 systemd-resolved[1284]: Clock change detected. Flushing caches.
Jan 16 21:16:53.794652 ldconfig[1504]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 21:16:53.799140 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 16 21:16:53.812902 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 21:16:53.827548 systemd[1]: Reached target network.target - Network.
Jan 16 21:16:53.840045 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 21:16:53.844521 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 16 21:16:53.847676 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 21:16:53.863771 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 21:16:53.895929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:16:53.912792 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 21:16:53.925163 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 21:16:53.937843 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 21:16:53.949632 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 21:16:53.962099 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 16 21:16:53.972732 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 21:16:53.982847 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 21:16:53.996233 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 16 21:16:54.009584 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 16 21:16:54.019105 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 21:16:54.031029 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 21:16:54.031153 systemd[1]: Reached target paths.target - Path Units.
Jan 16 21:16:54.039838 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 21:16:54.052671 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 21:16:54.066090 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 21:16:54.078620 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 16 21:16:54.092022 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 16 21:16:54.105130 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 16 21:16:54.149182 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 21:16:54.159384 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 16 21:16:54.175096 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 16 21:16:54.189196 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 21:16:54.210005 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 21:16:54.221564 systemd[1]: Reached target basic.target - Basic System.
Jan 16 21:16:54.237147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 21:16:54.237865 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 21:16:54.245068 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 21:16:54.259854 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 21:16:54.271877 systemd-networkd[1517]: eth0: Gained IPv6LL
Jan 16 21:16:54.276686 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 21:16:54.290223 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 21:16:54.320759 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 21:16:54.332920 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 21:16:54.338847 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 16 21:16:54.342629 jq[1553]: false
Jan 16 21:16:54.357926 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 21:16:54.363507 extend-filesystems[1554]: Found /dev/vda6
Jan 16 21:16:54.376789 extend-filesystems[1554]: Found /dev/vda9
Jan 16 21:16:54.376789 extend-filesystems[1554]: Checking size of /dev/vda9
Jan 16 21:16:54.371702 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 16 21:16:54.378988 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 21:16:54.426625 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 21:16:54.439507 extend-filesystems[1554]: Resized partition /dev/vda9
Jan 16 21:16:54.445668 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 21:16:54.457896 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 21:16:54.459554 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 21:16:54.462067 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 21:16:54.470626 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jan 16 21:16:54.469159 oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jan 16 21:16:54.476501 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025)
Jan 16 21:16:54.479165 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 21:16:54.510632 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting
Jan 16 21:16:54.510632 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 16 21:16:54.510632 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache
Jan 16 21:16:54.509499 oslogin_cache_refresh[1555]: Failure getting users, quitting
Jan 16 21:16:54.509536 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 16 21:16:54.509623 oslogin_cache_refresh[1555]: Refreshing group entry cache
Jan 16 21:16:54.514072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 21:16:54.530845 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 21:16:54.540725 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting
Jan 16 21:16:54.540803 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 16 21:16:54.555961 oslogin_cache_refresh[1555]: Failure getting groups, quitting
Jan 16 21:16:54.557102 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 16 21:16:54.556037 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 16 21:16:54.557888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 21:16:54.559608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 21:16:54.560554 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 16 21:16:54.560940 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 16 21:16:54.563148 jq[1574]: true
Jan 16 21:16:54.573496 update_engine[1573]: I20260116 21:16:54.572223 1573 main.cc:92] Flatcar Update Engine starting
Jan 16 21:16:54.583923 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 21:16:54.584679 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 21:16:54.613939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 21:16:54.614676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 16 21:16:54.733542 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 16 21:16:54.736643 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 21:16:54.741185 jq[1590]: true
Jan 16 21:16:54.759560 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 16 21:16:54.777814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 21:16:54.780988 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 16 21:16:54.780988 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 16 21:16:54.780988 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 16 21:16:54.844823 extend-filesystems[1554]: Resized filesystem in /dev/vda9
Jan 16 21:16:54.849132 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 21:16:54.878004 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 16 21:16:54.878749 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 16 21:16:54.908702 tar[1588]: linux-amd64/LICENSE
Jan 16 21:16:54.923396 tar[1588]: linux-amd64/helm
Jan 16 21:16:54.960557 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 16 21:16:54.961820 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 16 21:16:54.980145 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 16 21:16:55.340772 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 16 21:16:55.340815 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 16 21:16:55.344805 systemd-logind[1570]: New seat seat0.
Jan 16 21:16:55.367411 kernel: hrtimer: interrupt took 4434138 ns
Jan 16 21:16:55.401806 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 16 21:16:55.427170 dbus-daemon[1551]: [system] SELinux support is enabled
Jan 16 21:16:55.427929 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 21:16:55.447169 update_engine[1573]: I20260116 21:16:55.447113 1573 update_check_scheduler.cc:74] Next update check in 10m36s
Jan 16 21:16:55.488717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 21:16:55.521883 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 21:16:55.522692 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 21:16:55.569836 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 21:16:55.570115 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 21:16:55.595936 systemd[1]: Started update-engine.service - Update Engine.
Jan 16 21:16:55.610607 dbus-daemon[1551]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 16 21:16:55.618001 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 16 21:16:55.641961 bash[1639]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 21:16:55.649568 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 21:16:55.650520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 16 21:16:55.704543 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 16 21:16:56.043040 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 21:16:56.074182 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 21:16:56.430972 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 21:16:56.431776 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 21:16:56.447705 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 21:16:56.937793 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 21:16:56.963584 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 21:16:56.976871 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 16 21:16:56.988843 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 21:16:57.050599 locksmithd[1640]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 16 21:16:58.235904 containerd[1592]: time="2026-01-16T21:16:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 16 21:16:58.245639 containerd[1592]: time="2026-01-16T21:16:58.245590271Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.280874563Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="85.57µs"
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.281003634Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.281596040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.281620716Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.282822219Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.282847306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.282934549Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 16 21:16:58.283417 containerd[1592]: time="2026-01-16T21:16:58.282952793Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.285547 containerd[1592]: time="2026-01-16T21:16:58.285419268Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.286043 containerd[1592]: time="2026-01-16T21:16:58.285607319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 16 21:16:58.286616 containerd[1592]: time="2026-01-16T21:16:58.286591567Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 16 21:16:58.286853 containerd[1592]: time="2026-01-16T21:16:58.286833758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.290556 containerd[1592]: time="2026-01-16T21:16:58.287553172Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.290668 containerd[1592]: time="2026-01-16T21:16:58.290646236Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 16 21:16:58.291601 containerd[1592]: time="2026-01-16T21:16:58.291568298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.292173 containerd[1592]: time="2026-01-16T21:16:58.292146507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.292740 containerd[1592]: time="2026-01-16T21:16:58.292710751Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 16 21:16:58.292839 containerd[1592]: time="2026-01-16T21:16:58.292816137Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 16 21:16:58.293584 containerd[1592]: time="2026-01-16T21:16:58.293561820Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 16 21:16:58.301104 containerd[1592]: time="2026-01-16T21:16:58.300424131Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 16 21:16:58.301104 containerd[1592]: time="2026-01-16T21:16:58.300647328Z" level=info msg="metadata content store policy set" policy=shared
Jan 16 21:16:58.338762 containerd[1592]: time="2026-01-16T21:16:58.338624170Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 16 21:16:58.339793 containerd[1592]: time="2026-01-16T21:16:58.339755973Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.340849186Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.340961877Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.340990791Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341009416Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341027710Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341043179Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341057846Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341072494Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 16 21:16:58.341093 containerd[1592]: time="2026-01-16T21:16:58.341086410Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 16 21:16:58.341564 containerd[1592]: time="2026-01-16T21:16:58.341122808Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 16 21:16:58.341564 containerd[1592]: time="2026-01-16T21:16:58.341141332Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 16 21:16:58.341564 containerd[1592]: time="2026-01-16T21:16:58.341157332Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 16 21:16:58.342658 containerd[1592]:
time="2026-01-16T21:16:58.342369392Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 21:16:58.347544 tar[1588]: linux-amd64/README.md Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345521717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345558115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345574326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345593020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345608379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345625000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345640299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345739293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345757718Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.345772786Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 21:16:58.348817 containerd[1592]: 
time="2026-01-16T21:16:58.345810126Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.346786528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.346810504Z" level=info msg="Start snapshots syncer" Jan 16 21:16:58.348817 containerd[1592]: time="2026-01-16T21:16:58.347771420Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 21:16:58.349927 containerd[1592]: time="2026-01-16T21:16:58.349816989Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImag
eDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 16 21:16:58.350672 containerd[1592]: time="2026-01-16T21:16:58.350412331Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 21:16:58.351608 containerd[1592]: time="2026-01-16T21:16:58.350980251Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 21:16:58.351652 containerd[1592]: time="2026-01-16T21:16:58.351607923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 21:16:58.351681 containerd[1592]: time="2026-01-16T21:16:58.351650783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 21:16:58.351681 containerd[1592]: time="2026-01-16T21:16:58.351668155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 16 21:16:58.351746 containerd[1592]: time="2026-01-16T21:16:58.351685608Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 21:16:58.351746 containerd[1592]: time="2026-01-16T21:16:58.351703731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 21:16:58.351746 containerd[1592]: time="2026-01-16T21:16:58.351720894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 21:16:58.351746 containerd[1592]: 
time="2026-01-16T21:16:58.351734239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 21:16:58.351851 containerd[1592]: time="2026-01-16T21:16:58.351746992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 21:16:58.351851 containerd[1592]: time="2026-01-16T21:16:58.351761139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 16 21:16:58.351959 containerd[1592]: time="2026-01-16T21:16:58.351889168Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:16:58.351959 containerd[1592]: time="2026-01-16T21:16:58.351914204Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:16:58.351959 containerd[1592]: time="2026-01-16T21:16:58.351928571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:16:58.351959 containerd[1592]: time="2026-01-16T21:16:58.351941255Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.351951915Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.351978054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.351994174Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.352011416Z" level=info msg="runtime interface created" 
Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.352020232Z" level=info msg="created NRI interface" Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.352034739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 21:16:58.352061 containerd[1592]: time="2026-01-16T21:16:58.352050369Z" level=info msg="Connect containerd service" Jan 16 21:16:58.352226 containerd[1592]: time="2026-01-16T21:16:58.352075856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 21:16:58.358136 containerd[1592]: time="2026-01-16T21:16:58.358106773Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 21:16:58.395886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 21:16:58.627575 containerd[1592]: time="2026-01-16T21:16:58.627180310Z" level=info msg="Start subscribing containerd event" Jan 16 21:16:58.628072 containerd[1592]: time="2026-01-16T21:16:58.627719376Z" level=info msg="Start recovering state" Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628162603Z" level=info msg="Start event monitor" Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628246781Z" level=info msg="Start cni network conf syncer for default" Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628610220Z" level=info msg="Start streaming server" Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628629124Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628639975Z" level=info msg="runtime interface starting up..." Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628647699Z" level=info msg="starting plugins..." 
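The `failed to load cni during init` error above is containerd's CNI plugin finding nothing in `/etc/cni/net.d` — expected on a node whose pod network add-on has not been installed yet. For reference, a minimal config of the kind the loader looks for can be sketched as below; the network name, bridge name, and subnet are illustrative, not taken from this host:

```python
import json

# Hypothetical minimal CNI conflist of the shape containerd scans for in
# /etc/cni/net.d (e.g. as 10-example-net.conflist). All values below are
# illustrative assumptions, not configuration from this log.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
            },
        }
    ],
}

# Serialized form that would satisfy the "no network config found" check
# on the next CNI conf sync.
payload = json.dumps(conflist, indent=2)
```

In practice a network add-on (Flannel, Calico, etc.) drops its own conflist here, at which point the "Start cni network conf syncer for default" loop seen below picks it up without a containerd restart.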
Jan 16 21:16:58.630535 containerd[1592]: time="2026-01-16T21:16:58.628668678Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 21:16:58.634940 containerd[1592]: time="2026-01-16T21:16:58.634814154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 21:16:58.635092 containerd[1592]: time="2026-01-16T21:16:58.634995252Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 21:16:58.641892 containerd[1592]: time="2026-01-16T21:16:58.635900570Z" level=info msg="containerd successfully booted in 0.402970s" Jan 16 21:16:58.636111 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 21:16:59.696773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:16:59.714159 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 21:16:59.716723 systemd[1]: Startup finished in 21.064s (kernel) + 17.602s (initrd) + 14.916s (userspace) = 53.583s. Jan 16 21:16:59.744139 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:17:01.093205 kubelet[1695]: E0116 21:17:01.092705 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:17:01.103783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:17:01.104171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:17:01.105102 systemd[1]: kubelet.service: Consumed 3.248s CPU time, 257.3M memory peak. Jan 16 21:17:03.732109 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
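The kubelet exit above (`status=1/FAILURE`) is the standard first-boot failure: `/var/lib/kubelet/config.yaml` is written by `kubeadm init`/`kubeadm join`, so until the node joins a cluster the kubelet can only fail and be restarted by its unit's restart policy. A minimal sketch of the check it is failing (the function name and message wording are mine, modeled on the logged error):

```python
import os

def kubelet_config_error(path="/var/lib/kubelet/config.yaml"):
    """Return an error string of the same class kubelet logs when its
    config file is missing, or None once the file exists (e.g. after
    `kubeadm join` has written it)."""
    if not os.path.isfile(path):
        return (f"failed to load Kubelet config file {path}: "
                "no such file or directory")
    return None
```

After the failure, systemd applies the unit's restart policy — which is the "kubelet.service: Scheduled restart job, restart counter is at 1" entry that appears further down this log.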
Jan 16 21:17:03.735888 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:40698.service - OpenSSH per-connection server daemon (10.0.0.1:40698). Jan 16 21:17:04.122433 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 40698 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:04.135192 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:04.173140 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 21:17:04.176828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 21:17:04.190016 systemd-logind[1570]: New session 1 of user core. Jan 16 21:17:04.252030 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 21:17:04.267757 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 21:17:04.335698 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:04.358777 systemd-logind[1570]: New session 2 of user core. Jan 16 21:17:04.732703 systemd[1715]: Queued start job for default target default.target. Jan 16 21:17:04.751014 systemd[1715]: Created slice app.slice - User Application Slice. Jan 16 21:17:04.751150 systemd[1715]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 21:17:04.751171 systemd[1715]: Reached target paths.target - Paths. Jan 16 21:17:04.753980 systemd[1715]: Reached target timers.target - Timers. Jan 16 21:17:04.760153 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 21:17:04.764191 systemd[1715]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 21:17:04.804907 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 21:17:04.806706 systemd[1715]: Reached target sockets.target - Sockets. 
Jan 16 21:17:04.827040 systemd[1715]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 21:17:04.827618 systemd[1715]: Reached target basic.target - Basic System. Jan 16 21:17:04.827729 systemd[1715]: Reached target default.target - Main User Target. Jan 16 21:17:04.827784 systemd[1715]: Startup finished in 449ms. Jan 16 21:17:04.828063 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 21:17:04.847619 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 21:17:04.948614 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:40702.service - OpenSSH per-connection server daemon (10.0.0.1:40702). Jan 16 21:17:05.139245 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 40702 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:05.142702 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:05.183666 systemd-logind[1570]: New session 3 of user core. Jan 16 21:17:05.193866 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 21:17:05.275814 sshd[1733]: Connection closed by 10.0.0.1 port 40702 Jan 16 21:17:05.279658 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 16 21:17:05.303160 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:40712.service - OpenSSH per-connection server daemon (10.0.0.1:40712). Jan 16 21:17:05.305758 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:40702.service: Deactivated successfully. Jan 16 21:17:05.309743 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 21:17:05.319044 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Jan 16 21:17:05.328141 systemd-logind[1570]: Removed session 3. 
Jan 16 21:17:05.492104 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 40712 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:05.496855 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:05.532126 systemd-logind[1570]: New session 4 of user core. Jan 16 21:17:05.617066 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 21:17:05.662642 sshd[1743]: Connection closed by 10.0.0.1 port 40712 Jan 16 21:17:05.664791 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jan 16 21:17:05.680113 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:40712.service: Deactivated successfully. Jan 16 21:17:05.687147 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 21:17:05.693614 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Jan 16 21:17:05.699438 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:40728.service - OpenSSH per-connection server daemon (10.0.0.1:40728). Jan 16 21:17:05.703627 systemd-logind[1570]: Removed session 4. Jan 16 21:17:05.819763 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 40728 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:05.824229 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:05.843107 systemd-logind[1570]: New session 5 of user core. Jan 16 21:17:05.858806 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 21:17:05.904016 sshd[1753]: Connection closed by 10.0.0.1 port 40728 Jan 16 21:17:05.906008 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 16 21:17:05.922095 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:40728.service: Deactivated successfully. Jan 16 21:17:05.925881 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 21:17:05.930226 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. 
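Each of the short-lived connections above follows the same four-step pattern: `Accepted publickey` → session scope started → `Connection closed` → service deactivated. When auditing such churn, the accept and close lines can be paired by client port to measure session length — a sketch, assuming the exact timestamp and `port N` wording used in this log:

```python
import re
from datetime import datetime

def session_seconds(accept_line, close_line):
    """Pair an 'Accepted publickey' entry with its 'Connection closed'
    entry (matched by client port) and return the elapsed seconds."""
    ts = r"(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d{6})"
    t_a, port_a = re.search(ts + r".*port (\d+)", accept_line).groups()
    t_c, port_c = re.search(ts + r".*port (\d+)", close_line).groups()
    assert port_a == port_c, "lines describe different connections"
    fmt = "%b %d %H:%M:%S.%f"
    return (datetime.strptime(t_c, fmt)
            - datetime.strptime(t_a, fmt)).total_seconds()
```

Applied to session 5 above (port 40728, accepted 21:17:05.819763, closed 21:17:05.904016) this gives roughly 0.084 s — consistent with a scripted client running one command per connection.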
Jan 16 21:17:05.935176 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:40740.service - OpenSSH per-connection server daemon (10.0.0.1:40740). Jan 16 21:17:05.937928 systemd-logind[1570]: Removed session 5. Jan 16 21:17:06.067394 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 40740 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:06.071939 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:06.095176 systemd-logind[1570]: New session 6 of user core. Jan 16 21:17:06.113040 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 21:17:06.232141 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 21:17:06.233073 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:17:06.265917 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 16 21:17:06.270932 sshd[1763]: Connection closed by 10.0.0.1 port 40740 Jan 16 21:17:06.271726 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 16 21:17:06.302214 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:40740.service: Deactivated successfully. Jan 16 21:17:06.307731 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 21:17:06.311607 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Jan 16 21:17:06.321017 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). Jan 16 21:17:06.323703 systemd-logind[1570]: Removed session 6. Jan 16 21:17:06.458859 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:06.464027 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:06.491403 systemd-logind[1570]: New session 7 of user core. 
Jan 16 21:17:06.513596 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 21:17:06.591193 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 21:17:06.596945 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:17:06.631759 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 16 21:17:06.660049 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 21:17:06.660972 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:17:06.694100 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:17:06.850000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:17:06.856160 augenrules[1801]: No rules Jan 16 21:17:06.861472 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 21:17:06.862142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 21:17:06.866748 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 16 21:17:06.850000 audit[1801]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3d0b5830 a2=420 a3=0 items=0 ppid=1782 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:06.874203 sshd[1775]: Connection closed by 10.0.0.1 port 40746 Jan 16 21:17:06.874683 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jan 16 21:17:06.892087 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:40746.service: Deactivated successfully. Jan 16 21:17:06.895694 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 21:17:06.902768 systemd-logind[1570]: Session 7 logged out. 
Waiting for processes to exit. Jan 16 21:17:06.905861 systemd-logind[1570]: Removed session 7. Jan 16 21:17:06.914649 kernel: audit: type=1305 audit(1768598226.850:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:17:06.914784 kernel: audit: type=1300 audit(1768598226.850:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3d0b5830 a2=420 a3=0 items=0 ppid=1782 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:06.850000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:17:06.915808 kernel: audit: type=1327 audit(1768598226.850:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:17:06.935940 kernel: audit: type=1130 audit(1768598226.861:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.993451 kernel: audit: type=1131 audit(1768598226.861:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:06.993685 kernel: audit: type=1106 audit(1768598226.865:218): pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.865000 audit[1776]: USER_END pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:07.026701 kernel: audit: type=1104 audit(1768598226.866:219): pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.866000 audit[1776]: CRED_DISP pid=1776 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:06.883000 audit[1771]: USER_END pid=1771 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.102684 kernel: audit: type=1106 audit(1768598226.883:220): pid=1771 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.102879 kernel: audit: type=1104 audit(1768598226.883:221): pid=1771 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:06.883000 audit[1771]: CRED_DISP pid=1771 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.136716 kernel: audit: type=1131 audit(1768598226.891:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.34:22-10.0.0.1:40746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:06.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.34:22-10.0.0.1:40746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:07.192183 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:40760.service - OpenSSH per-connection server daemon (10.0.0.1:40760). Jan 16 21:17:07.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:40760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:07.318866 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 40760 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:17:07.316000 audit[1810]: USER_ACCT pid=1810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.319000 audit[1810]: CRED_ACQ pid=1810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.320000 audit[1810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec2869270 a2=3 a3=0 items=0 ppid=1 pid=1810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:07.320000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:17:07.322831 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:17:07.354142 systemd-logind[1570]: New session 8 of user core. Jan 16 21:17:07.372019 systemd[1]: Started session-8.scope - Session 8 of User core. 
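The long hex strings in the `PROCTITLE` audit records above are not garbage: the kernel hex-encodes the process command line, with argv elements separated by NUL bytes. A minimal decoder recovers the original command:

```python
def decode_proctitle(hexstr):
    """Audit PROCTITLE fields are hex-encoded argv joined by NUL bytes;
    decode the hex and split back into an argument list."""
    return bytes.fromhex(hexstr).decode("ascii", errors="replace").split("\x00")

# The auditctl record logged earlier in this boot:
args = decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
# args == ["/sbin/auditctl", "-R", "/etc/audit/audit.rules"]
```

The same function applied to the sshd record above yields `["sshd-session: core [priv]"]` — the privileged monitor process for the new session.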
Jan 16 21:17:07.391000 audit[1810]: USER_START pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.397000 audit[1814]: CRED_ACQ pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:17:07.449000 audit[1815]: USER_ACCT pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:07.451927 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 21:17:07.450000 audit[1815]: CRED_REFR pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:07.452942 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:17:07.451000 audit[1815]: USER_START pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:17:08.475645 systemd[1]: Starting docker.service - Docker Application Container Engine... 
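The `audit(EPOCH.msec:serial)` stamps carried inside the records are seconds since the Unix epoch plus a per-record serial number, which is why the delayed kauditd echoes can be matched back to the original events despite appearing out of order in the console stream. A minimal parser:

```python
import re
from datetime import datetime, timezone

def audit_stamp(record: str):
    """Parse the 'audit(EPOCH.msec:serial)' stamp out of an audit record,
    returning (UTC timestamp, serial)."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    ts = datetime.fromtimestamp(int(m.group(1)), tz=timezone.utc)
    return ts.isoformat(), int(m.group(3))

# audit_stamp("type=1106 audit(1768598226.883:220): pid=1771")
# -> ('2026-01-16T21:17:06+00:00', 220)
```

For example, serial 220 (the USER_END echo above) decodes to 2026-01-16 21:17:06 UTC, matching the `Jan 16 21:17:06.883000` wall-clock prefix of the original record even though the echo was printed at 21:17:07.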
Jan 16 21:17:08.502237 (dockerd)[1837]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 21:17:10.850000 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1647026318 wd_nsec: 1647026232 Jan 16 21:17:11.040785 dockerd[1837]: time="2026-01-16T21:17:11.040029505Z" level=info msg="Starting up" Jan 16 21:17:11.046949 dockerd[1837]: time="2026-01-16T21:17:11.045866893Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 21:17:11.133651 dockerd[1837]: time="2026-01-16T21:17:11.132669894Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 21:17:11.191035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 21:17:11.211600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:17:11.362832 dockerd[1837]: time="2026-01-16T21:17:11.362759676Z" level=info msg="Loading containers: start." 
Jan 16 21:17:11.414749 kernel: Initializing XFRM netlink socket Jan 16 21:17:11.947000 audit[1892]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:11.987733 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 16 21:17:11.987979 kernel: audit: type=1325 audit(1768598231.947:232): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:11.947000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdf8d20040 a2=0 a3=0 items=0 ppid=1837 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.032694 kernel: audit: type=1300 audit(1768598231.947:232): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdf8d20040 a2=0 a3=0 items=0 ppid=1837 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:11.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:17:12.064787 kernel: audit: type=1327 audit(1768598231.947:232): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:17:11.973000 audit[1894]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:11.973000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe72ba5380 a2=0 a3=0 items=0 ppid=1837 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:17:12.132964 kernel: audit: type=1325 audit(1768598231.973:233): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.133751 kernel: audit: type=1300 audit(1768598231.973:233): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe72ba5380 a2=0 a3=0 items=0 ppid=1837 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.133856 kernel: audit: type=1327 audit(1768598231.973:233): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:17:11.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:17:11.995000 audit[1896]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.181229 kernel: audit: type=1325 audit(1768598231.995:234): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.181640 kernel: audit: type=1300 audit(1768598231.995:234): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe62906bf0 a2=0 a3=0 items=0 ppid=1837 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:11.995000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe62906bf0 a2=0 a3=0 items=0 ppid=1837 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.225712 kernel: audit: 
type=1327 audit(1768598231.995:234): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:17:11.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:17:12.249792 kernel: audit: type=1325 audit(1768598232.008:235): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.008000 audit[1898]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.008000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3be73c50 a2=0 a3=0 items=0 ppid=1837 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:17:12.025000 audit[1900]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.025000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd06277e50 a2=0 a3=0 items=0 ppid=1837 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.025000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:17:12.040000 audit[1903]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.040000 
audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe38416a70 a2=0 a3=0 items=0 ppid=1837 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:17:12.060000 audit[1906]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.060000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd13e1dec0 a2=0 a3=0 items=0 ppid=1837 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:17:12.320464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:17:12.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:17:12.359707 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:17:12.077000 audit[1908]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.077000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd96a87300 a2=0 a3=0 items=0 ppid=1837 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:17:12.392000 audit[1919]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.392000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff692e6290 a2=0 a3=0 items=0 ppid=1837 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 21:17:12.407000 audit[1922]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.407000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffef2536e50 a2=0 a3=0 items=0 ppid=1837 pid=1922 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:17:12.424000 audit[1924]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.424000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffca4c17da0 a2=0 a3=0 items=0 ppid=1837 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:17:12.443000 audit[1927]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.443000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffea45b81c0 a2=0 a3=0 items=0 ppid=1837 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.443000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:17:12.461000 audit[1929]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:12.461000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeba640c70 a2=0 a3=0 items=0 
ppid=1837 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:17:12.616060 kubelet[1913]: E0116 21:17:12.615063 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:17:12.627000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:17:12.627954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:17:12.631665 systemd[1]: kubelet.service: Consumed 953ms CPU time, 110.6M memory peak. Jan 16 21:17:12.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:17:12.815000 audit[1961]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.815000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd1050a6d0 a2=0 a3=0 items=0 ppid=1837 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:17:12.842000 audit[1963]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.842000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd81465130 a2=0 a3=0 items=0 ppid=1837 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:17:12.881000 audit[1965]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.881000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd6853710 a2=0 a3=0 items=0 ppid=1837 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:17:12.906000 
audit[1967]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.906000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff2d500f0 a2=0 a3=0 items=0 ppid=1837 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:17:12.934000 audit[1969]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.934000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd280bd2f0 a2=0 a3=0 items=0 ppid=1837 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:17:12.958000 audit[1971]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.958000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd860f6c0 a2=0 a3=0 items=0 ppid=1837 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 
21:17:12.976000 audit[1973]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:12.976000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd75818c60 a2=0 a3=0 items=0 ppid=1837 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:12.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:17:13.003000 audit[1975]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.003000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd8cb58c10 a2=0 a3=0 items=0 ppid=1837 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:17:13.040000 audit[1977]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.040000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd01588c40 a2=0 a3=0 items=0 ppid=1837 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.040000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 21:17:13.064000 audit[1979]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.064000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc8bc61580 a2=0 a3=0 items=0 ppid=1837 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:17:13.084000 audit[1981]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.084000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd851014a0 a2=0 a3=0 items=0 ppid=1837 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:17:13.110000 audit[1983]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.110000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffd8422540 a2=0 a3=0 items=0 ppid=1837 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:17:13.133000 audit[1985]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.133000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd311696c0 a2=0 a3=0 items=0 ppid=1837 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:17:13.188000 audit[1990]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.188000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd1324ae0 a2=0 a3=0 items=0 ppid=1837 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.188000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:17:13.206000 audit[1992]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.206000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc5d592fd0 a2=0 a3=0 items=0 ppid=1837 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:17:13.229000 audit[1994]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.229000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd0228aa0 a2=0 a3=0 items=0 ppid=1837 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:17:13.252000 audit[1996]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.252000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8a135ae0 a2=0 a3=0 items=0 ppid=1837 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:17:13.269000 audit[1998]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.269000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd59893090 a2=0 a3=0 items=0 ppid=1837 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:17:13.287000 audit[2000]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:17:13.287000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff2e981c40 a2=0 a3=0 items=0 ppid=1837 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:17:13.455000 audit[2006]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.455000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc9b2113c0 a2=0 a3=0 items=0 ppid=1837 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 21:17:13.477000 audit[2008]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.477000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff5764c8c0 a2=0 a3=0 items=0 ppid=1837 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 21:17:13.572000 audit[2016]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.572000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd24c120a0 a2=0 a3=0 items=0 ppid=1837 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 21:17:13.668000 audit[2022]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.668000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffea5758500 a2=0 a3=0 items=0 ppid=1837 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 21:17:13.694000 audit[2024]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.694000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 
a0=3 a1=7ffc30bde340 a2=0 a3=0 items=0 ppid=1837 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 21:17:13.715000 audit[2026]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.715000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2512ec90 a2=0 a3=0 items=0 ppid=1837 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 21:17:13.737000 audit[2028]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.737000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd760fc200 a2=0 a3=0 items=0 ppid=1837 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.737000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:17:13.762000 audit[2030]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:17:13.762000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf1173110 a2=0 a3=0 items=0 ppid=1837 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:17:13.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 21:17:13.767773 systemd-networkd[1517]: docker0: Link UP Jan 16 21:17:13.783147 dockerd[1837]: time="2026-01-16T21:17:13.782708968Z" level=info msg="Loading containers: done." Jan 16 21:17:13.906195 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1270557002-merged.mount: Deactivated successfully. 
Jan 16 21:17:13.931761 dockerd[1837]: time="2026-01-16T21:17:13.930950337Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 21:17:13.931761 dockerd[1837]: time="2026-01-16T21:17:13.931116678Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 21:17:13.931761 dockerd[1837]: time="2026-01-16T21:17:13.931237513Z" level=info msg="Initializing buildkit" Jan 16 21:17:14.104676 dockerd[1837]: time="2026-01-16T21:17:14.103141646Z" level=info msg="Completed buildkit initialization" Jan 16 21:17:14.131937 dockerd[1837]: time="2026-01-16T21:17:14.131694175Z" level=info msg="Daemon has completed initialization" Jan 16 21:17:14.131937 dockerd[1837]: time="2026-01-16T21:17:14.131872888Z" level=info msg="API listen on /run/docker.sock" Jan 16 21:17:14.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:14.133838 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 21:17:16.534660 containerd[1592]: time="2026-01-16T21:17:16.533817995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 16 21:17:18.008043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105490917.mount: Deactivated successfully. Jan 16 21:17:22.679060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 21:17:22.684771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:17:23.153870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 21:17:23.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:23.165856 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 16 21:17:23.165955 kernel: audit: type=1130 audit(1768598243.152:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:23.208740 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:17:23.341751 containerd[1592]: time="2026-01-16T21:17:23.340504342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:23.346753 containerd[1592]: time="2026-01-16T21:17:23.346016772Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25531765" Jan 16 21:17:23.356529 containerd[1592]: time="2026-01-16T21:17:23.356053430Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:23.378934 containerd[1592]: time="2026-01-16T21:17:23.378879701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:23.384004 containerd[1592]: time="2026-01-16T21:17:23.383183881Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 6.849138221s" Jan 16 21:17:23.384004 containerd[1592]: time="2026-01-16T21:17:23.383746421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 16 21:17:23.386192 containerd[1592]: time="2026-01-16T21:17:23.385914186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 16 21:17:23.472219 kubelet[2141]: E0116 21:17:23.471877 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:17:23.481550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:17:23.481937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:17:23.483156 systemd[1]: kubelet.service: Consumed 607ms CPU time, 110.3M memory peak. Jan 16 21:17:23.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:17:23.520684 kernel: audit: type=1131 audit(1768598243.481:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:17:31.340958 containerd[1592]: time="2026-01-16T21:17:31.339677721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:31.345799 containerd[1592]: time="2026-01-16T21:17:31.345145285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 16 21:17:31.359776 containerd[1592]: time="2026-01-16T21:17:31.359117162Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:31.378649 containerd[1592]: time="2026-01-16T21:17:31.378138560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:31.381038 containerd[1592]: time="2026-01-16T21:17:31.380610257Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 7.994554768s" Jan 16 21:17:31.382223 containerd[1592]: time="2026-01-16T21:17:31.381860824Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 16 21:17:31.389732 containerd[1592]: time="2026-01-16T21:17:31.389100099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 16 21:17:33.822879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 16 21:17:33.836834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:17:36.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:36.444619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:17:36.505801 kernel: audit: type=1130 audit(1768598256.443:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:36.515186 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:17:36.812611 kubelet[2167]: E0116 21:17:36.812009 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:17:36.821004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:17:36.823669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:17:36.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:17:36.825131 systemd[1]: kubelet.service: Consumed 2.008s CPU time, 110M memory peak. Jan 16 21:17:36.858966 kernel: audit: type=1131 audit(1768598256.822:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 16 21:17:38.466865 containerd[1592]: time="2026-01-16T21:17:38.465902743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:38.474977 containerd[1592]: time="2026-01-16T21:17:38.474153221Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15721026" Jan 16 21:17:38.484755 containerd[1592]: time="2026-01-16T21:17:38.483212236Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:38.493185 containerd[1592]: time="2026-01-16T21:17:38.492880378Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 7.10364977s" Jan 16 21:17:38.493185 containerd[1592]: time="2026-01-16T21:17:38.493050845Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 16 21:17:38.493870 containerd[1592]: time="2026-01-16T21:17:38.491149438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:38.496174 containerd[1592]: time="2026-01-16T21:17:38.495893981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 16 21:17:41.165819 update_engine[1573]: I20260116 21:17:41.163691 1573 update_attempter.cc:509] Updating boot flags... 
Jan 16 21:17:43.333110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798056423.mount: Deactivated successfully. Jan 16 21:17:46.931982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 16 21:17:46.942742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:17:47.022098 containerd[1592]: time="2026-01-16T21:17:47.020941313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:47.041760 containerd[1592]: time="2026-01-16T21:17:47.041130779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Jan 16 21:17:47.045654 containerd[1592]: time="2026-01-16T21:17:47.043734515Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:47.050598 containerd[1592]: time="2026-01-16T21:17:47.049714400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:17:47.051055 containerd[1592]: time="2026-01-16T21:17:47.051002417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 8.554970713s" Jan 16 21:17:47.052214 containerd[1592]: time="2026-01-16T21:17:47.051202710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 16 21:17:47.057657 containerd[1592]: 
time="2026-01-16T21:17:47.056944769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 16 21:17:47.864850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:17:47.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:47.911868 kernel: audit: type=1130 audit(1768598267.864:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:17:47.945202 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:17:48.429657 kubelet[2209]: E0116 21:17:48.428695 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:17:48.440646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:17:48.440898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:17:48.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:17:48.449745 systemd[1]: kubelet.service: Consumed 856ms CPU time, 110.1M memory peak. Jan 16 21:17:48.484898 kernel: audit: type=1131 audit(1768598268.441:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 16 21:17:49.041888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323547669.mount: Deactivated successfully. Jan 16 21:17:58.715157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 16 21:17:58.926194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:00.423749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:00.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:00.469000 kernel: audit: type=1130 audit(1768598280.428:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:00.488116 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:00.700117 containerd[1592]: time="2026-01-16T21:18:00.696785037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:00.703580 containerd[1592]: time="2026-01-16T21:18:00.703534798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22379735" Jan 16 21:18:00.707178 containerd[1592]: time="2026-01-16T21:18:00.707125288Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:00.717040 containerd[1592]: time="2026-01-16T21:18:00.717008924Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:00.719212 containerd[1592]: time="2026-01-16T21:18:00.719181944Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 13.662193142s" Jan 16 21:18:00.719930 containerd[1592]: time="2026-01-16T21:18:00.719689710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 16 21:18:00.727071 containerd[1592]: time="2026-01-16T21:18:00.725736843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 16 21:18:01.368048 kubelet[2277]: E0116 21:18:01.367800 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:01.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:01.375198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:01.375916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:01.377725 systemd[1]: kubelet.service: Consumed 1.799s CPU time, 110.5M memory peak. 
Jan 16 21:18:01.412687 kernel: audit: type=1131 audit(1768598281.376:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:01.723120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098989065.mount: Deactivated successfully. Jan 16 21:18:01.751807 containerd[1592]: time="2026-01-16T21:18:01.751063300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:01.757950 containerd[1592]: time="2026-01-16T21:18:01.757815101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 16 21:18:01.775859 containerd[1592]: time="2026-01-16T21:18:01.774839992Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:01.785619 containerd[1592]: time="2026-01-16T21:18:01.785184169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:01.795084 containerd[1592]: time="2026-01-16T21:18:01.793202452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.067309197s" Jan 16 21:18:01.795084 containerd[1592]: time="2026-01-16T21:18:01.794740168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 16 21:18:01.822903 
containerd[1592]: time="2026-01-16T21:18:01.821793524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 16 21:18:03.720788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904887457.mount: Deactivated successfully. Jan 16 21:18:11.428227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 16 21:18:11.438861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:12.221138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:12.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:12.263768 kernel: audit: type=1130 audit(1768598292.221:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:12.283072 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:12.689228 kubelet[2349]: E0116 21:18:12.689184 2349 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:12.714769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:12.716971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:12.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:18:12.718742 systemd[1]: kubelet.service: Consumed 725ms CPU time, 110.4M memory peak. Jan 16 21:18:12.783843 kernel: audit: type=1131 audit(1768598292.717:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:22.944933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 16 21:18:22.979208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:25.184062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:25.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:25.248097 kernel: audit: type=1130 audit(1768598305.200:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:25.416075 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:18:26.688636 containerd[1592]: time="2026-01-16T21:18:26.686975135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.715645 containerd[1592]: time="2026-01-16T21:18:26.713124415Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72895754" Jan 16 21:18:26.743544 containerd[1592]: time="2026-01-16T21:18:26.743032778Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.770989 containerd[1592]: time="2026-01-16T21:18:26.770566923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:18:26.773144 containerd[1592]: time="2026-01-16T21:18:26.773008330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 24.950777651s" Jan 16 21:18:26.773144 containerd[1592]: time="2026-01-16T21:18:26.773049727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 16 21:18:26.914747 kubelet[2371]: E0116 21:18:26.914193 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file 
/var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:18:26.925500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:18:26.925827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:18:26.927104 systemd[1]: kubelet.service: Consumed 2.386s CPU time, 110.7M memory peak. Jan 16 21:18:26.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:26.971798 kernel: audit: type=1131 audit(1768598306.926:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:18:35.346873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:35.347113 systemd[1]: kubelet.service: Consumed 2.386s CPU time, 110.7M memory peak. Jan 16 21:18:35.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:35.378525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:35.411018 kernel: audit: type=1130 audit(1768598315.346:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:35.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:18:35.444188 kernel: audit: type=1131 audit(1768598315.346:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:35.504959 systemd[1]: Reload requested from client PID 2409 ('systemctl') (unit session-8.scope)... Jan 16 21:18:35.507228 systemd[1]: Reloading... Jan 16 21:18:35.769506 zram_generator::config[2451]: No configuration found. Jan 16 21:18:36.594210 systemd[1]: Reloading finished in 1085 ms. Jan 16 21:18:36.641000 audit: BPF prog-id=63 op=LOAD Jan 16 21:18:36.689056 kernel: audit: type=1334 audit(1768598316.641:289): prog-id=63 op=LOAD Jan 16 21:18:36.689179 kernel: audit: type=1334 audit(1768598316.641:290): prog-id=46 op=UNLOAD Jan 16 21:18:36.641000 audit: BPF prog-id=46 op=UNLOAD Jan 16 21:18:36.641000 audit: BPF prog-id=64 op=LOAD Jan 16 21:18:36.641000 audit: BPF prog-id=65 op=LOAD Jan 16 21:18:36.711485 kernel: audit: type=1334 audit(1768598316.641:291): prog-id=64 op=LOAD Jan 16 21:18:36.711576 kernel: audit: type=1334 audit(1768598316.641:292): prog-id=65 op=LOAD Jan 16 21:18:36.641000 audit: BPF prog-id=47 op=UNLOAD Jan 16 21:18:36.731540 kernel: audit: type=1334 audit(1768598316.641:293): prog-id=47 op=UNLOAD Jan 16 21:18:36.731669 kernel: audit: type=1334 audit(1768598316.641:294): prog-id=48 op=UNLOAD Jan 16 21:18:36.641000 audit: BPF prog-id=48 op=UNLOAD Jan 16 21:18:36.641000 audit: BPF prog-id=66 op=LOAD Jan 16 21:18:36.753663 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 21:18:36.755081 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 21:18:36.763098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:36.763157 systemd[1]: kubelet.service: Consumed 323ms CPU time, 98.2M memory peak. 
Jan 16 21:18:36.775188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 21:18:36.641000 audit: BPF prog-id=67 op=LOAD
Jan 16 21:18:36.790929 kernel: audit: type=1334 audit(1768598316.641:295): prog-id=66 op=LOAD
Jan 16 21:18:36.791064 kernel: audit: type=1334 audit(1768598316.641:296): prog-id=67 op=LOAD
Jan 16 21:18:36.641000 audit: BPF prog-id=56 op=UNLOAD
Jan 16 21:18:36.641000 audit: BPF prog-id=57 op=UNLOAD
Jan 16 21:18:36.652000 audit: BPF prog-id=68 op=LOAD
Jan 16 21:18:36.652000 audit: BPF prog-id=59 op=UNLOAD
Jan 16 21:18:36.655000 audit: BPF prog-id=69 op=LOAD
Jan 16 21:18:36.655000 audit: BPF prog-id=58 op=UNLOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=70 op=LOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=60 op=UNLOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=71 op=LOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=72 op=LOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=61 op=UNLOAD
Jan 16 21:18:36.667000 audit: BPF prog-id=62 op=UNLOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=73 op=LOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=43 op=UNLOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=74 op=LOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=75 op=LOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=44 op=UNLOAD
Jan 16 21:18:36.671000 audit: BPF prog-id=45 op=UNLOAD
Jan 16 21:18:36.672000 audit: BPF prog-id=76 op=LOAD
Jan 16 21:18:36.672000 audit: BPF prog-id=49 op=UNLOAD
Jan 16 21:18:36.672000 audit: BPF prog-id=77 op=LOAD
Jan 16 21:18:36.672000 audit: BPF prog-id=78 op=LOAD
Jan 16 21:18:36.673000 audit: BPF prog-id=50 op=UNLOAD
Jan 16 21:18:36.673000 audit: BPF prog-id=51 op=UNLOAD
Jan 16 21:18:36.674000 audit: BPF prog-id=79 op=LOAD
Jan 16 21:18:36.674000 audit: BPF prog-id=52 op=UNLOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=80 op=LOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=53 op=UNLOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=81 op=LOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=82 op=LOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=54 op=UNLOAD
Jan 16 21:18:36.679000 audit: BPF prog-id=55 op=UNLOAD
Jan 16 21:18:36.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 16 21:18:37.471562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 21:18:37.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:18:37.496982 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 21:18:38.113670 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 16 21:18:38.113670 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 21:18:38.115699 kubelet[2501]: I0116 21:18:38.113905 2501 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 16 21:18:40.212527 kubelet[2501]: I0116 21:18:40.211761 2501 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 16 21:18:40.212527 kubelet[2501]: I0116 21:18:40.211984 2501 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 16 21:18:40.212527 kubelet[2501]: I0116 21:18:40.212572 2501 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 16 21:18:40.212527 kubelet[2501]: I0116 21:18:40.212593 2501 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 16 21:18:40.213874 kubelet[2501]: I0116 21:18:40.212971 2501 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 16 21:18:40.241838 kubelet[2501]: E0116 21:18:40.241445 2501 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 16 21:18:40.245633 kubelet[2501]: I0116 21:18:40.245058 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 16 21:18:40.299506 kubelet[2501]: I0116 21:18:40.297877 2501 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 16 21:18:40.314726 kubelet[2501]: I0116 21:18:40.314598 2501 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 16 21:18:40.315244 kubelet[2501]: I0116 21:18:40.314988 2501 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 16 21:18:40.315937 kubelet[2501]: I0116 21:18:40.315116 2501 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 16 21:18:40.315937 kubelet[2501]: I0116 21:18:40.315652 2501 topology_manager.go:138] "Creating topology manager with none policy"
Jan 16 21:18:40.315937 kubelet[2501]: I0116 21:18:40.315668 2501 container_manager_linux.go:306] "Creating device plugin manager"
Jan 16 21:18:40.315937 kubelet[2501]: I0116 21:18:40.315822 2501 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 16 21:18:40.324127 kubelet[2501]: I0116 21:18:40.323985 2501 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 21:18:40.324576 kubelet[2501]: I0116 21:18:40.324466 2501 kubelet.go:475] "Attempting to sync node with API server"
Jan 16 21:18:40.325396 kubelet[2501]: I0116 21:18:40.324715 2501 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 16 21:18:40.325396 kubelet[2501]: I0116 21:18:40.324811 2501 kubelet.go:387] "Adding apiserver pod source"
Jan 16 21:18:40.325396 kubelet[2501]: I0116 21:18:40.324835 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 16 21:18:40.327568 kubelet[2501]: E0116 21:18:40.326050 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 16 21:18:40.327568 kubelet[2501]: E0116 21:18:40.326884 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 16 21:18:40.339772 kubelet[2501]: I0116 21:18:40.336772 2501 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 16 21:18:40.345404 kubelet[2501]: I0116 21:18:40.345044 2501 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 16 21:18:40.345971 kubelet[2501]: I0116 21:18:40.345724 2501 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 16 21:18:40.348911 kubelet[2501]: W0116 21:18:40.348524 2501 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 16 21:18:40.369829 kubelet[2501]: I0116 21:18:40.369729 2501 server.go:1262] "Started kubelet"
Jan 16 21:18:40.370651 kubelet[2501]: I0116 21:18:40.370490 2501 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 16 21:18:40.372101 kubelet[2501]: I0116 21:18:40.371992 2501 server.go:310] "Adding debug handlers to kubelet server"
Jan 16 21:18:40.378634 kubelet[2501]: I0116 21:18:40.374421 2501 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 16 21:18:40.378634 kubelet[2501]: I0116 21:18:40.374492 2501 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 16 21:18:40.378634 kubelet[2501]: I0116 21:18:40.374849 2501 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 16 21:18:40.378913 kubelet[2501]: I0116 21:18:40.378897 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 16 21:18:40.384522 kubelet[2501]: I0116 21:18:40.382737 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 16 21:18:40.384522 kubelet[2501]: E0116 21:18:40.384233 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.384522 kubelet[2501]: I0116 21:18:40.384412 2501 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 16 21:18:40.385049 kubelet[2501]: I0116 21:18:40.384907 2501 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 16 21:18:40.385578 kubelet[2501]: I0116 21:18:40.385390 2501 reconciler.go:29] "Reconciler: start to sync state"
Jan 16 21:18:40.389495 kubelet[2501]: E0116 21:18:40.386693 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 16 21:18:40.394466 kubelet[2501]: I0116 21:18:40.391099 2501 factory.go:223] Registration of the systemd container factory successfully
Jan 16 21:18:40.394466 kubelet[2501]: I0116 21:18:40.392087 2501 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 16 21:18:40.394578 kubelet[2501]: E0116 21:18:40.389917 2501 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b52cd4bfcf087 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:18:40.369692807 +0000 UTC m=+2.820992327,LastTimestamp:2026-01-16 21:18:40.369692807 +0000 UTC m=+2.820992327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 16 21:18:40.400354 kubelet[2501]: I0116 21:18:40.399898 2501 factory.go:223] Registration of the containerd container factory successfully
Jan 16 21:18:40.400644 kubelet[2501]: E0116 21:18:40.400469 2501 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms"
Jan 16 21:18:40.404847 kubelet[2501]: E0116 21:18:40.404642 2501 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 16 21:18:40.440000 audit[2521]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.451810 kernel: kauditd_printk_skb: 34 callbacks suppressed
Jan 16 21:18:40.452470 kernel: audit: type=1325 audit(1768598320.440:331): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.440000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd4fb62c70 a2=0 a3=0 items=0 ppid=2501 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.487574 kubelet[2501]: E0116 21:18:40.484641 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.516216 kernel: audit: type=1300 audit(1768598320.440:331): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd4fb62c70 a2=0 a3=0 items=0 ppid=2501 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.516670 kernel: audit: type=1327 audit(1768598320.440:331): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 16 21:18:40.440000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 16 21:18:40.531400 kernel: audit: type=1325 audit(1768598320.444:332): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.444000 audit[2522]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.551568 kernel: audit: type=1300 audit(1768598320.444:332): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda17f7b70 a2=0 a3=0 items=0 ppid=2501 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.444000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda17f7b70 a2=0 a3=0 items=0 ppid=2501 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.588583 kubelet[2501]: E0116 21:18:40.587700 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 16 21:18:40.608974 kernel: audit: type=1327 audit(1768598320.444:332): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 16 21:18:40.613901 kubelet[2501]: E0116 21:18:40.613442 2501 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms"
Jan 16 21:18:40.486000 audit[2524]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.486000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1a437450 a2=0 a3=0 items=0 ppid=2501 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.686473 kubelet[2501]: I0116 21:18:40.685695 2501 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 16 21:18:40.686473 kubelet[2501]: I0116 21:18:40.685733 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 16 21:18:40.686473 kubelet[2501]: I0116 21:18:40.685798 2501 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 21:18:40.687888 kubelet[2501]: E0116 21:18:40.687867 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.692226 kernel: audit: type=1325 audit(1768598320.486:333): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.692447 kernel: audit: type=1300 audit(1768598320.486:333): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1a437450 a2=0 a3=0 items=0 ppid=2501 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.692485 kubelet[2501]: I0116 21:18:40.692436 2501 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 16 21:18:40.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 16 21:18:40.701516 kubelet[2501]: I0116 21:18:40.700409 2501 policy_none.go:49] "None policy: Start"
Jan 16 21:18:40.703125 kubelet[2501]: I0116 21:18:40.702589 2501 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 16 21:18:40.703546 kubelet[2501]: I0116 21:18:40.703527 2501 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 16 21:18:40.704896 kubelet[2501]: I0116 21:18:40.704826 2501 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 16 21:18:40.525000 audit[2526]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.726505 kubelet[2501]: E0116 21:18:40.725802 2501 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 16 21:18:40.726505 kubelet[2501]: I0116 21:18:40.725957 2501 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 16 21:18:40.726505 kubelet[2501]: I0116 21:18:40.725991 2501 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 16 21:18:40.735621 kernel: audit: type=1327 audit(1768598320.486:333): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 16 21:18:40.736037 kernel: audit: type=1325 audit(1768598320.525:334): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.525000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffacf0a4e0 a2=0 a3=0 items=0 ppid=2501 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 16 21:18:40.659000 audit[2531]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.659000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffed44a02a0 a2=0 a3=0 items=0 ppid=2501 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E
Jan 16 21:18:40.694000 audit[2534]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:18:40.694000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffa3e77430 a2=0 a3=0 items=0 ppid=2501 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 16 21:18:40.705000 audit[2535]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.705000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1e6e2c00 a2=0 a3=0 items=0 ppid=2501 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 16 21:18:40.714000 audit[2537]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.714000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec629f860 a2=0 a3=0 items=0 ppid=2501 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 16 21:18:40.714000 audit[2536]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:18:40.714000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5aad7030 a2=0 a3=0 items=0 ppid=2501 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 16 21:18:40.739000 audit[2539]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:18:40.739000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2e38a540 a2=0 a3=0 items=0 ppid=2501 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 16 21:18:40.741000 audit[2538]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:18:40.741000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd93b0c20 a2=0 a3=0 items=0 ppid=2501 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 16 21:18:40.768416 kubelet[2501]: E0116 21:18:40.738087 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 16 21:18:40.773976 kubelet[2501]: I0116 21:18:40.773842 2501 policy_none.go:47] "Start"
Jan 16 21:18:40.773000 audit[2541]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:18:40.773000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb6eb93a0 a2=0 a3=0 items=0 ppid=2501 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:18:40.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 16 21:18:40.790619 kubelet[2501]: E0116 21:18:40.788701 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.835623 kubelet[2501]: E0116 21:18:40.831035 2501 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 16 21:18:40.870559 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 16 21:18:40.894624 kubelet[2501]: E0116 21:18:40.894489 2501 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:18:40.922920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 16 21:18:40.937889 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 16 21:18:40.980703 kubelet[2501]: E0116 21:18:40.979415 2501 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 16 21:18:40.981620 kubelet[2501]: I0116 21:18:40.980915 2501 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 21:18:40.981620 kubelet[2501]: I0116 21:18:40.981078 2501 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 21:18:40.985717 kubelet[2501]: I0116 21:18:40.982607 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 21:18:41.005644 kubelet[2501]: E0116 21:18:41.003528 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 16 21:18:41.005644 kubelet[2501]: E0116 21:18:41.003771 2501 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 16 21:18:41.018539 kubelet[2501]: E0116 21:18:41.018012 2501 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms"
Jan 16 21:18:41.094078 kubelet[2501]: I0116 21:18:41.093467 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:18:41.094078 kubelet[2501]: I0116 21:18:41.093589 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:18:41.094078 kubelet[2501]: I0116 21:18:41.093888 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:18:41.096604 kubelet[2501]: I0116 21:18:41.096449 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 16 21:18:41.106833 kubelet[2501]: E0116 21:18:41.106004 2501 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Jan 16 21:18:41.191673 systemd[1]: Created slice kubepods-burstable-pod62f81c84c8709343deef330ace8da2cf.slice - libcontainer container kubepods-burstable-pod62f81c84c8709343deef330ace8da2cf.slice.
Jan 16 21:18:41.195456 kubelet[2501]: I0116 21:18:41.194917 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 16 21:18:41.195456 kubelet[2501]: I0116 21:18:41.194961 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:18:41.195456 kubelet[2501]: I0116 21:18:41.194986 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:18:41.195456 kubelet[2501]: I0116 21:18:41.195008 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:18:41.195456 kubelet[2501]: I0116 21:18:41.195033 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:18:41.195816 kubelet[2501]: I0116 21:18:41.195508 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:18:41.235115 kubelet[2501]: E0116 21:18:41.234926 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 16 21:18:41.243647 kubelet[2501]: E0116 21:18:41.239964 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 16 21:18:41.273020 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice.
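The audit records above carry the invoked command in hex-encoded PROCTITLE fields, with NUL bytes separating the argv elements. A minimal sketch of decoding one of them (the decoder function name is ours, not part of any tool in this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex string; argv elements are NUL-separated."""
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# The first PROCTITLE record in this log:
print(decode_proctitle(
    "69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
))
# → iptables -w 5 -N KUBE-IPTABLES-HINT -t mangle
```

The other PROCTITLE records decode the same way to the iptables/ip6tables invocations creating the KUBE-FIREWALL and KUBE-KUBELET-CANARY chains.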
Jan 16 21:18:41.274652 kubelet[2501]: E0116 21:18:41.274617 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:41.285365 containerd[1592]: time="2026-01-16T21:18:41.283118388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62f81c84c8709343deef330ace8da2cf,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:41.301453 kubelet[2501]: E0116 21:18:41.300944 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 21:18:41.310578 kubelet[2501]: E0116 21:18:41.302932 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:41.324960 kubelet[2501]: I0116 21:18:41.323855 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:41.324960 kubelet[2501]: E0116 21:18:41.324600 2501 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jan 16 21:18:41.337555 kubelet[2501]: E0116 21:18:41.335523 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:41.353197 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 16 21:18:41.389758 containerd[1592]: time="2026-01-16T21:18:41.351078234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:41.403503 kubelet[2501]: E0116 21:18:41.402106 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:41.426815 kubelet[2501]: E0116 21:18:41.423706 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:41.430368 kubelet[2501]: E0116 21:18:41.430105 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 16 21:18:41.434599 containerd[1592]: time="2026-01-16T21:18:41.433497068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 16 21:18:41.736748 kubelet[2501]: E0116 21:18:41.733200 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 21:18:41.747669 kubelet[2501]: I0116 21:18:41.745604 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:41.750580 kubelet[2501]: E0116 21:18:41.749908 2501 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jan 16 21:18:41.821822 kubelet[2501]: E0116 21:18:41.820749 2501 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Jan 16 21:18:42.373493 kubelet[2501]: E0116 21:18:42.372840 2501 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 16 21:18:42.576769 kubelet[2501]: I0116 21:18:42.576650 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:42.580048 kubelet[2501]: E0116 21:18:42.580015 2501 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jan 16 21:18:42.605713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380980762.mount: Deactivated successfully. 
Jan 16 21:18:42.696507 containerd[1592]: time="2026-01-16T21:18:42.692712872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:42.696507 containerd[1592]: time="2026-01-16T21:18:42.695877059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=2405" Jan 16 21:18:42.709967 containerd[1592]: time="2026-01-16T21:18:42.708461253Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:42.739438 containerd[1592]: time="2026-01-16T21:18:42.738820342Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:42.747897 containerd[1592]: time="2026-01-16T21:18:42.745826136Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:18:42.753182 containerd[1592]: time="2026-01-16T21:18:42.752899203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 21:18:42.767871 containerd[1592]: time="2026-01-16T21:18:42.767834280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 21:18:42.770918 containerd[1592]: time="2026-01-16T21:18:42.769867968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 
21:18:42.776674 containerd[1592]: time="2026-01-16T21:18:42.775776904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.362977461s" Jan 16 21:18:42.780184 containerd[1592]: time="2026-01-16T21:18:42.779885790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.444776856s" Jan 16 21:18:42.799590 containerd[1592]: time="2026-01-16T21:18:42.799180732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.320526921s" Jan 16 21:18:43.104493 containerd[1592]: time="2026-01-16T21:18:43.094230511Z" level=info msg="connecting to shim 433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab" address="unix:///run/containerd/s/55886c6d6dda8f6b54514f0ba013349f01a480d0c0265d9841a66c270d10e816" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:43.108075 containerd[1592]: time="2026-01-16T21:18:43.094234301Z" level=info msg="connecting to shim 8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024" address="unix:///run/containerd/s/56166cc87c596ea5c1d821e73f50abe9d7da7261dbb81d57f7c349a558c938c1" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:43.157952 containerd[1592]: time="2026-01-16T21:18:43.136988847Z" level=info msg="connecting to shim 
2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8" address="unix:///run/containerd/s/dc5ab1dcaf0bf9c371d453b39963cb33ed53d1977d8f13a2d7ecbc9c04a2edca" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:18:43.277456 kubelet[2501]: E0116 21:18:43.277227 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 21:18:43.339229 systemd[1]: Started cri-containerd-8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024.scope - libcontainer container 8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024. Jan 16 21:18:43.396029 systemd[1]: Started cri-containerd-2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8.scope - libcontainer container 2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8. Jan 16 21:18:43.410911 systemd[1]: Started cri-containerd-433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab.scope - libcontainer container 433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab. 
Jan 16 21:18:43.419000 audit: BPF prog-id=83 op=LOAD Jan 16 21:18:43.421000 audit: BPF prog-id=84 op=LOAD Jan 16 21:18:43.421000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.422000 audit: BPF prog-id=84 op=UNLOAD Jan 16 21:18:43.422000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.423000 audit: BPF prog-id=85 op=LOAD Jan 16 21:18:43.426466 kubelet[2501]: E0116 21:18:43.425678 2501 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="3.2s" Jan 16 21:18:43.423000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.426000 audit: BPF prog-id=86 op=LOAD Jan 16 21:18:43.426000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.426000 audit: BPF prog-id=86 op=UNLOAD Jan 16 21:18:43.426000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.428000 audit: BPF prog-id=85 op=UNLOAD Jan 16 21:18:43.428000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.428000 audit: BPF prog-id=87 op=LOAD Jan 16 21:18:43.428000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2571 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866636231666238316464396430373235396165623839323430363462 Jan 16 21:18:43.465000 audit: BPF prog-id=88 op=LOAD Jan 16 21:18:43.469000 audit: BPF prog-id=89 op=LOAD Jan 16 21:18:43.469000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.469000 audit: BPF prog-id=89 op=UNLOAD Jan 16 21:18:43.469000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.470000 audit: BPF prog-id=90 op=LOAD Jan 16 21:18:43.470000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.470000 audit: BPF prog-id=91 op=LOAD Jan 16 21:18:43.470000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.470000 audit: BPF prog-id=91 op=UNLOAD Jan 16 21:18:43.470000 audit[2603]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.470000 audit: BPF prog-id=90 op=UNLOAD Jan 16 21:18:43.470000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.470000 audit: BPF prog-id=92 op=LOAD Jan 16 21:18:43.470000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2573 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616264663931363435646262633636656166303232313437343264 Jan 16 21:18:43.474000 audit: BPF prog-id=93 op=LOAD Jan 16 
21:18:43.477000 audit: BPF prog-id=94 op=LOAD Jan 16 21:18:43.477000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.477000 audit: BPF prog-id=94 op=UNLOAD Jan 16 21:18:43.477000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.478000 audit: BPF prog-id=95 op=LOAD Jan 16 21:18:43.478000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.478000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.478000 audit: BPF prog-id=96 op=LOAD Jan 16 21:18:43.478000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.478000 audit: BPF prog-id=96 op=UNLOAD Jan 16 21:18:43.478000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.478000 audit: BPF prog-id=95 op=UNLOAD Jan 16 21:18:43.478000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:18:43.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.478000 audit: BPF prog-id=97 op=LOAD Jan 16 21:18:43.478000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2565 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433336632386636396464613330303861336536643763313138376661 Jan 16 21:18:43.580178 containerd[1592]: time="2026-01-16T21:18:43.580042984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024\"" Jan 16 21:18:43.583379 kubelet[2501]: E0116 21:18:43.583045 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:43.605805 containerd[1592]: time="2026-01-16T21:18:43.605763136Z" level=info msg="CreateContainer within sandbox \"8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 21:18:43.612441 containerd[1592]: time="2026-01-16T21:18:43.612032702Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62f81c84c8709343deef330ace8da2cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab\"" Jan 16 21:18:43.615487 kubelet[2501]: E0116 21:18:43.615052 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:43.616240 containerd[1592]: time="2026-01-16T21:18:43.615938703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8\"" Jan 16 21:18:43.617729 kubelet[2501]: E0116 21:18:43.617659 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:43.629825 containerd[1592]: time="2026-01-16T21:18:43.629709575Z" level=info msg="CreateContainer within sandbox \"2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 21:18:43.637750 containerd[1592]: time="2026-01-16T21:18:43.637711719Z" level=info msg="CreateContainer within sandbox \"433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 21:18:43.648640 containerd[1592]: time="2026-01-16T21:18:43.648206751Z" level=info msg="Container 46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:43.700768 containerd[1592]: time="2026-01-16T21:18:43.700627044Z" level=info msg="Container b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:43.709935 
containerd[1592]: time="2026-01-16T21:18:43.709671534Z" level=info msg="CreateContainer within sandbox \"8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0\"" Jan 16 21:18:43.717869 containerd[1592]: time="2026-01-16T21:18:43.717757192Z" level=info msg="StartContainer for \"46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0\"" Jan 16 21:18:43.721039 containerd[1592]: time="2026-01-16T21:18:43.720887194Z" level=info msg="Container 734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:18:43.729329 containerd[1592]: time="2026-01-16T21:18:43.727046755Z" level=info msg="connecting to shim 46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0" address="unix:///run/containerd/s/56166cc87c596ea5c1d821e73f50abe9d7da7261dbb81d57f7c349a558c938c1" protocol=ttrpc version=3 Jan 16 21:18:43.735447 containerd[1592]: time="2026-01-16T21:18:43.731944312Z" level=info msg="CreateContainer within sandbox \"2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2\"" Jan 16 21:18:43.742451 containerd[1592]: time="2026-01-16T21:18:43.738517423Z" level=info msg="StartContainer for \"b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2\"" Jan 16 21:18:43.748423 containerd[1592]: time="2026-01-16T21:18:43.747798551Z" level=info msg="connecting to shim b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2" address="unix:///run/containerd/s/dc5ab1dcaf0bf9c371d453b39963cb33ed53d1977d8f13a2d7ecbc9c04a2edca" protocol=ttrpc version=3 Jan 16 21:18:43.776608 containerd[1592]: time="2026-01-16T21:18:43.776422361Z" level=info msg="CreateContainer within sandbox 
\"433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81\"" Jan 16 21:18:43.778536 containerd[1592]: time="2026-01-16T21:18:43.777216343Z" level=info msg="StartContainer for \"734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81\"" Jan 16 21:18:43.783828 containerd[1592]: time="2026-01-16T21:18:43.783410567Z" level=info msg="connecting to shim 734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81" address="unix:///run/containerd/s/55886c6d6dda8f6b54514f0ba013349f01a480d0c0265d9841a66c270d10e816" protocol=ttrpc version=3 Jan 16 21:18:43.816671 systemd[1]: Started cri-containerd-46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0.scope - libcontainer container 46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0. Jan 16 21:18:43.836850 systemd[1]: Started cri-containerd-b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2.scope - libcontainer container b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2. Jan 16 21:18:43.857434 kubelet[2501]: E0116 21:18:43.856546 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 21:18:43.888826 systemd[1]: Started cri-containerd-734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81.scope - libcontainer container 734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81. 
Jan 16 21:18:43.890845 kubelet[2501]: E0116 21:18:43.890768 2501 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 16 21:18:43.909000 audit: BPF prog-id=98 op=LOAD Jan 16 21:18:43.911000 audit: BPF prog-id=99 op=LOAD Jan 16 21:18:43.911000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.911000 audit: BPF prog-id=99 op=UNLOAD Jan 16 21:18:43.911000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.918000 audit: BPF prog-id=100 op=LOAD Jan 16 21:18:43.918000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2571 pid=2686 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.919000 audit: BPF prog-id=101 op=LOAD Jan 16 21:18:43.919000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.919000 audit: BPF prog-id=101 op=UNLOAD Jan 16 21:18:43.919000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.919000 audit: BPF prog-id=100 op=UNLOAD Jan 16 21:18:43.919000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.919000 audit: BPF prog-id=102 op=LOAD Jan 16 21:18:43.919000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2571 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436643937393363643531346364636234353165323462663466386636 Jan 16 21:18:43.927000 audit: BPF prog-id=103 op=LOAD Jan 16 21:18:43.932000 audit: BPF prog-id=104 op=LOAD Jan 16 21:18:43.932000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.932000 audit: BPF prog-id=104 op=UNLOAD Jan 16 
21:18:43.932000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.933000 audit: BPF prog-id=105 op=LOAD Jan 16 21:18:43.933000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.934000 audit: BPF prog-id=106 op=LOAD Jan 16 21:18:43.934000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 
21:18:43.935000 audit: BPF prog-id=106 op=UNLOAD Jan 16 21:18:43.935000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.935000 audit: BPF prog-id=105 op=UNLOAD Jan 16 21:18:43.935000 audit[2692]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.936000 audit: BPF prog-id=107 op=LOAD Jan 16 21:18:43.936000 audit[2692]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2573 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236373536353133333038656336653663363439366434323062666637 Jan 16 21:18:43.965000 audit: BPF prog-id=108 op=LOAD Jan 16 21:18:43.966000 audit: BPF prog-id=109 op=LOAD Jan 16 21:18:43.966000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.966000 audit: BPF prog-id=109 op=UNLOAD Jan 16 21:18:43.966000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.966000 audit: BPF prog-id=110 op=LOAD Jan 16 21:18:43.966000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.967000 audit: BPF prog-id=111 op=LOAD Jan 16 21:18:43.967000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.967000 audit: BPF prog-id=111 op=UNLOAD Jan 16 21:18:43.967000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.967000 audit: BPF prog-id=110 op=UNLOAD Jan 16 21:18:43.967000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:43.967000 audit: BPF prog-id=112 op=LOAD Jan 16 21:18:43.967000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2565 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:18:43.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733346136646265323231363162653934356535666236316635633336 Jan 16 21:18:44.095899 containerd[1592]: time="2026-01-16T21:18:44.095550091Z" level=info msg="StartContainer for \"46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0\" returns successfully" Jan 16 21:18:44.104447 containerd[1592]: time="2026-01-16T21:18:44.103985266Z" level=info msg="StartContainer for \"734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81\" returns successfully" Jan 16 21:18:44.115755 containerd[1592]: time="2026-01-16T21:18:44.115532374Z" level=info msg="StartContainer for \"b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2\" returns successfully" Jan 16 21:18:44.198927 kubelet[2501]: I0116 21:18:44.198841 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:44.201599 kubelet[2501]: E0116 21:18:44.201546 2501 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jan 16 21:18:44.538795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229853688.mount: Deactivated successfully. Jan 16 21:18:44.946039 kubelet[2501]: E0116 21:18:44.946003 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:44.952526 kubelet[2501]: E0116 21:18:44.952416 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:44.979550 kubelet[2501]: E0116 21:18:44.978527 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:44.979550 kubelet[2501]: E0116 21:18:44.979420 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:44.980431 kubelet[2501]: E0116 21:18:44.980406 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:44.981652 kubelet[2501]: E0116 21:18:44.981630 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:45.983446 kubelet[2501]: E0116 21:18:45.981958 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:45.983446 kubelet[2501]: E0116 21:18:45.982209 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:45.983446 kubelet[2501]: E0116 21:18:45.983004 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:45.989937 kubelet[2501]: E0116 21:18:45.983736 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:45.992978 kubelet[2501]: E0116 21:18:45.991493 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:45.997626 kubelet[2501]: E0116 21:18:45.996422 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:46.989409 kubelet[2501]: E0116 21:18:46.987954 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:46.992203 kubelet[2501]: E0116 21:18:46.991867 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:46.994551 kubelet[2501]: E0116 21:18:46.994114 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:46.995227 kubelet[2501]: E0116 21:18:46.994949 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:46.999673 kubelet[2501]: E0116 21:18:46.998633 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:46.999673 kubelet[2501]: E0116 21:18:46.999594 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:47.413981 kubelet[2501]: I0116 21:18:47.413714 2501 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:48.003214 kubelet[2501]: E0116 21:18:48.003184 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:48.007108 kubelet[2501]: E0116 21:18:48.006441 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:48.487093 kubelet[2501]: E0116 21:18:48.486800 2501 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:18:48.487744 kubelet[2501]: E0116 21:18:48.487531 2501 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:48.658578 kubelet[2501]: E0116 21:18:48.658202 2501 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 16 21:18:48.771766 kubelet[2501]: E0116 21:18:48.770637 2501 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b52cd4bfcf087 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:18:40.369692807 +0000 UTC m=+2.820992327,LastTimestamp:2026-01-16 21:18:40.369692807 +0000 UTC m=+2.820992327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 16 21:18:48.860565 kubelet[2501]: E0116 21:18:48.860066 2501 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b52cd4e11ee4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:18:40.404622922 +0000 UTC m=+2.855922452,LastTimestamp:2026-01-16 21:18:40.404622922 +0000 UTC m=+2.855922452,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 16 21:18:48.904164 kubelet[2501]: I0116 21:18:48.903901 2501 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 16 21:18:48.943858 kubelet[2501]: E0116 21:18:48.942141 2501 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b52cd5e963ab9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:18:40.681728697 +0000 UTC m=+3.133028217,LastTimestamp:2026-01-16 21:18:40.681728697 +0000 UTC 
m=+3.133028217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 16 21:18:48.996784 kubelet[2501]: I0116 21:18:48.996462 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:49.029660 kubelet[2501]: E0116 21:18:49.028592 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:49.029660 kubelet[2501]: I0116 21:18:49.028634 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:49.039914 kubelet[2501]: E0116 21:18:49.039815 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:49.039914 kubelet[2501]: I0116 21:18:49.039852 2501 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 16 21:18:49.044744 kubelet[2501]: E0116 21:18:49.044602 2501 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 16 21:18:49.341832 kubelet[2501]: I0116 21:18:49.339636 2501 apiserver.go:52] "Watching apiserver" Jan 16 21:18:49.388474 kubelet[2501]: I0116 21:18:49.387830 2501 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 21:18:53.715033 systemd[1]: Reload requested from client PID 2799 ('systemctl') (unit session-8.scope)... Jan 16 21:18:53.715129 systemd[1]: Reloading... Jan 16 21:18:53.930446 zram_generator::config[2845]: No configuration found. 
Jan 16 21:18:54.690162 systemd[1]: Reloading finished in 973 ms. Jan 16 21:18:54.763504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:18:54.789729 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 21:18:54.790550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:54.790721 systemd[1]: kubelet.service: Consumed 4.615s CPU time, 128.5M memory peak. Jan 16 21:18:54.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:54.799428 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 16 21:18:54.799508 kernel: audit: type=1131 audit(1768598334.789:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:54.798654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 21:18:54.802000 audit: BPF prog-id=113 op=LOAD Jan 16 21:18:54.834751 kernel: audit: type=1334 audit(1768598334.802:392): prog-id=113 op=LOAD Jan 16 21:18:54.835019 kernel: audit: type=1334 audit(1768598334.802:393): prog-id=79 op=UNLOAD Jan 16 21:18:54.802000 audit: BPF prog-id=79 op=UNLOAD Jan 16 21:18:54.845662 kernel: audit: type=1334 audit(1768598334.809:394): prog-id=114 op=LOAD Jan 16 21:18:54.809000 audit: BPF prog-id=114 op=LOAD Jan 16 21:18:54.860228 kernel: audit: type=1334 audit(1768598334.809:395): prog-id=73 op=UNLOAD Jan 16 21:18:54.809000 audit: BPF prog-id=73 op=UNLOAD Jan 16 21:18:54.872034 kernel: audit: type=1334 audit(1768598334.809:396): prog-id=115 op=LOAD Jan 16 21:18:54.809000 audit: BPF prog-id=115 op=LOAD Jan 16 21:18:54.881154 kernel: audit: type=1334 audit(1768598334.809:397): prog-id=116 op=LOAD Jan 16 21:18:54.809000 audit: BPF prog-id=116 op=LOAD Jan 16 21:18:54.809000 audit: BPF prog-id=74 op=UNLOAD Jan 16 21:18:54.809000 audit: BPF prog-id=75 op=UNLOAD Jan 16 21:18:54.810000 audit: BPF prog-id=117 op=LOAD Jan 16 21:18:54.810000 audit: BPF prog-id=118 op=LOAD Jan 16 21:18:54.810000 audit: BPF prog-id=66 op=UNLOAD Jan 16 21:18:54.810000 audit: BPF prog-id=67 op=UNLOAD Jan 16 21:18:54.813000 audit: BPF prog-id=119 op=LOAD Jan 16 21:18:54.813000 audit: BPF prog-id=70 op=UNLOAD Jan 16 21:18:54.895409 kernel: audit: type=1334 audit(1768598334.809:398): prog-id=74 op=UNLOAD Jan 16 21:18:54.895452 kernel: audit: type=1334 audit(1768598334.809:399): prog-id=75 op=UNLOAD Jan 16 21:18:54.895476 kernel: audit: type=1334 audit(1768598334.810:400): prog-id=117 op=LOAD Jan 16 21:18:54.813000 audit: BPF prog-id=120 op=LOAD Jan 16 21:18:54.813000 audit: BPF prog-id=121 op=LOAD Jan 16 21:18:54.813000 audit: BPF prog-id=71 op=UNLOAD Jan 16 21:18:54.813000 audit: BPF prog-id=72 op=UNLOAD Jan 16 21:18:54.814000 audit: BPF prog-id=122 op=LOAD Jan 16 21:18:54.814000 audit: BPF prog-id=69 op=UNLOAD Jan 16 21:18:54.815000 audit: BPF prog-id=123 
op=LOAD Jan 16 21:18:54.815000 audit: BPF prog-id=76 op=UNLOAD Jan 16 21:18:54.815000 audit: BPF prog-id=124 op=LOAD Jan 16 21:18:54.815000 audit: BPF prog-id=125 op=LOAD Jan 16 21:18:54.815000 audit: BPF prog-id=77 op=UNLOAD Jan 16 21:18:54.815000 audit: BPF prog-id=78 op=UNLOAD Jan 16 21:18:54.818000 audit: BPF prog-id=126 op=LOAD Jan 16 21:18:54.818000 audit: BPF prog-id=63 op=UNLOAD Jan 16 21:18:54.818000 audit: BPF prog-id=127 op=LOAD Jan 16 21:18:54.818000 audit: BPF prog-id=128 op=LOAD Jan 16 21:18:54.818000 audit: BPF prog-id=64 op=UNLOAD Jan 16 21:18:54.818000 audit: BPF prog-id=65 op=UNLOAD Jan 16 21:18:54.819000 audit: BPF prog-id=129 op=LOAD Jan 16 21:18:54.819000 audit: BPF prog-id=80 op=UNLOAD Jan 16 21:18:54.819000 audit: BPF prog-id=130 op=LOAD Jan 16 21:18:54.819000 audit: BPF prog-id=131 op=LOAD Jan 16 21:18:54.820000 audit: BPF prog-id=81 op=UNLOAD Jan 16 21:18:54.820000 audit: BPF prog-id=82 op=UNLOAD Jan 16 21:18:54.823000 audit: BPF prog-id=132 op=LOAD Jan 16 21:18:54.823000 audit: BPF prog-id=68 op=UNLOAD Jan 16 21:18:55.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:18:55.406121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:18:55.428575 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:18:55.659546 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:18:55.659546 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 21:18:55.659546 kubelet[2889]: I0116 21:18:55.659121 2889 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:18:55.735543 kubelet[2889]: I0116 21:18:55.735230 2889 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 16 21:18:55.735740 kubelet[2889]: I0116 21:18:55.735720 2889 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:18:55.735963 kubelet[2889]: I0116 21:18:55.735852 2889 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 16 21:18:55.736088 kubelet[2889]: I0116 21:18:55.736065 2889 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 21:18:55.739434 kubelet[2889]: I0116 21:18:55.739231 2889 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 21:18:55.743161 kubelet[2889]: I0116 21:18:55.743144 2889 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 16 21:18:55.752181 kubelet[2889]: I0116 21:18:55.752151 2889 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:18:55.776653 kubelet[2889]: I0116 21:18:55.776465 2889 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:18:55.800547 kubelet[2889]: I0116 21:18:55.800514 2889 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 16 21:18:55.801709 kubelet[2889]: I0116 21:18:55.801662 2889 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:18:55.802158 kubelet[2889]: I0116 21:18:55.801958 2889 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:18:55.802664 kubelet[2889]: I0116 21:18:55.802646 2889 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 21:18:55.802761 
kubelet[2889]: I0116 21:18:55.802748 2889 container_manager_linux.go:306] "Creating device plugin manager" Jan 16 21:18:55.802857 kubelet[2889]: I0116 21:18:55.802845 2889 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 16 21:18:55.805780 kubelet[2889]: I0116 21:18:55.805763 2889 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:55.806758 kubelet[2889]: I0116 21:18:55.806623 2889 kubelet.go:475] "Attempting to sync node with API server" Jan 16 21:18:55.806758 kubelet[2889]: I0116 21:18:55.806649 2889 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:18:55.806758 kubelet[2889]: I0116 21:18:55.806679 2889 kubelet.go:387] "Adding apiserver pod source" Jan 16 21:18:55.806758 kubelet[2889]: I0116 21:18:55.806699 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:18:55.812528 kubelet[2889]: I0116 21:18:55.811648 2889 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:18:55.814010 kubelet[2889]: I0116 21:18:55.813551 2889 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 21:18:55.816574 kubelet[2889]: I0116 21:18:55.816556 2889 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 16 21:18:55.844642 kubelet[2889]: I0116 21:18:55.844569 2889 server.go:1262] "Started kubelet" Jan 16 21:18:55.850148 kubelet[2889]: I0116 21:18:55.849852 2889 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:18:55.857421 kubelet[2889]: I0116 21:18:55.853500 2889 server.go:310] "Adding debug handlers to kubelet server" Jan 16 21:18:55.857532 kubelet[2889]: I0116 21:18:55.857428 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 
16 21:18:55.860184 kubelet[2889]: I0116 21:18:55.860023 2889 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:18:55.860420 kubelet[2889]: I0116 21:18:55.860218 2889 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 16 21:18:55.861212 kubelet[2889]: I0116 21:18:55.861033 2889 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:18:55.871425 kubelet[2889]: I0116 21:18:55.870150 2889 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:18:55.883724 kubelet[2889]: I0116 21:18:55.882799 2889 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 16 21:18:55.883724 kubelet[2889]: I0116 21:18:55.883214 2889 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 21:18:55.893141 kubelet[2889]: I0116 21:18:55.892475 2889 reconciler.go:29] "Reconciler: start to sync state" Jan 16 21:18:55.893141 kubelet[2889]: I0116 21:18:55.893150 2889 factory.go:223] Registration of the systemd container factory successfully Jan 16 21:18:55.895574 kubelet[2889]: I0116 21:18:55.895205 2889 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:18:55.922060 kubelet[2889]: I0116 21:18:55.919483 2889 factory.go:223] Registration of the containerd container factory successfully Jan 16 21:18:55.922060 kubelet[2889]: E0116 21:18:55.920663 2889 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:18:56.041646 kubelet[2889]: I0116 21:18:56.039602 2889 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 16 21:18:56.053449 kubelet[2889]: I0116 21:18:56.052561 2889 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 16 21:18:56.053803 kubelet[2889]: I0116 21:18:56.053718 2889 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 16 21:18:56.053968 kubelet[2889]: I0116 21:18:56.053852 2889 kubelet.go:2427] "Starting kubelet main sync loop" Jan 16 21:18:56.054033 kubelet[2889]: E0116 21:18:56.054004 2889 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081556 2889 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081576 2889 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081598 2889 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081847 2889 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081961 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.081995 2889 policy_none.go:49] "None policy: Start" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.082006 2889 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.082018 2889 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.082137 2889 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 16 21:18:56.082237 kubelet[2889]: I0116 21:18:56.082148 2889 policy_none.go:47] "Start" Jan 16 21:18:56.111622 kubelet[2889]: E0116 21:18:56.108715 2889 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 21:18:56.111622 kubelet[2889]: I0116 21:18:56.109190 2889 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 21:18:56.111622 kubelet[2889]: I0116 21:18:56.109209 2889 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 21:18:56.117956 kubelet[2889]: I0116 21:18:56.114096 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 21:18:56.117956 kubelet[2889]: E0116 21:18:56.116533 2889 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 21:18:56.174554 kubelet[2889]: I0116 21:18:56.172699 2889 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 16 21:18:56.193753 kubelet[2889]: I0116 21:18:56.173499 2889 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:56.193753 kubelet[2889]: I0116 21:18:56.173728 2889 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.301552 kubelet[2889]: I0116 21:18:56.301216 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.307569 kubelet[2889]: I0116 21:18:56.306996 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " 
pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:56.307569 kubelet[2889]: I0116 21:18:56.307151 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:56.307569 kubelet[2889]: I0116 21:18:56.307184 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62f81c84c8709343deef330ace8da2cf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62f81c84c8709343deef330ace8da2cf\") " pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:56.307569 kubelet[2889]: I0116 21:18:56.307214 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.307569 kubelet[2889]: I0116 21:18:56.307240 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.307848 kubelet[2889]: I0116 21:18:56.307490 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.307848 kubelet[2889]: I0116 21:18:56.307511 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 16 21:18:56.307848 kubelet[2889]: I0116 21:18:56.307529 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 16 21:18:56.314603 kubelet[2889]: I0116 21:18:56.314572 2889 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:18:56.379200 kubelet[2889]: I0116 21:18:56.376062 2889 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 16 21:18:56.379200 kubelet[2889]: I0116 21:18:56.378138 2889 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 16 21:18:56.527468 kubelet[2889]: E0116 21:18:56.527436 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:56.532494 kubelet[2889]: E0116 21:18:56.532093 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:56.532494 kubelet[2889]: E0116 21:18:56.532421 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:56.816626 
kubelet[2889]: I0116 21:18:56.815594 2889 apiserver.go:52] "Watching apiserver" Jan 16 21:18:56.890771 kubelet[2889]: I0116 21:18:56.890591 2889 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 21:18:57.194583 kubelet[2889]: I0116 21:18:57.193771 2889 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:57.194583 kubelet[2889]: E0116 21:18:57.194564 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:57.207823 kubelet[2889]: E0116 21:18:57.206598 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:57.230404 kubelet[2889]: E0116 21:18:57.227502 2889 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 16 21:18:57.230404 kubelet[2889]: E0116 21:18:57.227648 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:57.342040 kubelet[2889]: I0116 21:18:57.341162 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.340982498 podStartE2EDuration="1.340982498s" podCreationTimestamp="2026-01-16 21:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:57.291439155 +0000 UTC m=+1.849742126" watchObservedRunningTime="2026-01-16 21:18:57.340982498 +0000 UTC m=+1.899285459" Jan 16 21:18:57.342040 kubelet[2889]: I0116 21:18:57.341610 2889 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.341593346 podStartE2EDuration="1.341593346s" podCreationTimestamp="2026-01-16 21:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:57.334442396 +0000 UTC m=+1.892745356" watchObservedRunningTime="2026-01-16 21:18:57.341593346 +0000 UTC m=+1.899896327" Jan 16 21:18:57.380610 kubelet[2889]: I0116 21:18:57.379617 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.379595995 podStartE2EDuration="1.379595995s" podCreationTimestamp="2026-01-16 21:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:18:57.375829689 +0000 UTC m=+1.934132649" watchObservedRunningTime="2026-01-16 21:18:57.379595995 +0000 UTC m=+1.937898956" Jan 16 21:18:58.210186 kubelet[2889]: E0116 21:18:58.207170 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:18:58.212723 kubelet[2889]: E0116 21:18:58.210969 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:00.409181 kubelet[2889]: I0116 21:19:00.408728 2889 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 21:19:00.429687 containerd[1592]: time="2026-01-16T21:19:00.429624838Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 16 21:19:00.433499 kubelet[2889]: I0116 21:19:00.430175 2889 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 21:19:01.350936 systemd[1]: Created slice kubepods-besteffort-pod5ceb0703_5f29_4104_ae36_51d7b384f72f.slice - libcontainer container kubepods-besteffort-pod5ceb0703_5f29_4104_ae36_51d7b384f72f.slice. Jan 16 21:19:01.398897 kubelet[2889]: I0116 21:19:01.396733 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ceb0703-5f29-4104-ae36-51d7b384f72f-kube-proxy\") pod \"kube-proxy-cx8hg\" (UID: \"5ceb0703-5f29-4104-ae36-51d7b384f72f\") " pod="kube-system/kube-proxy-cx8hg" Jan 16 21:19:01.398897 kubelet[2889]: I0116 21:19:01.396782 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ceb0703-5f29-4104-ae36-51d7b384f72f-lib-modules\") pod \"kube-proxy-cx8hg\" (UID: \"5ceb0703-5f29-4104-ae36-51d7b384f72f\") " pod="kube-system/kube-proxy-cx8hg" Jan 16 21:19:01.402084 kubelet[2889]: I0116 21:19:01.401776 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ceb0703-5f29-4104-ae36-51d7b384f72f-xtables-lock\") pod \"kube-proxy-cx8hg\" (UID: \"5ceb0703-5f29-4104-ae36-51d7b384f72f\") " pod="kube-system/kube-proxy-cx8hg" Jan 16 21:19:01.403687 kubelet[2889]: I0116 21:19:01.402118 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksnp\" (UniqueName: \"kubernetes.io/projected/5ceb0703-5f29-4104-ae36-51d7b384f72f-kube-api-access-gksnp\") pod \"kube-proxy-cx8hg\" (UID: \"5ceb0703-5f29-4104-ae36-51d7b384f72f\") " pod="kube-system/kube-proxy-cx8hg" Jan 16 21:19:01.577939 systemd[1]: Created slice 
kubepods-besteffort-pod6d6d0bd4_45d9_4570_ac1c_16981d2fce42.slice - libcontainer container kubepods-besteffort-pod6d6d0bd4_45d9_4570_ac1c_16981d2fce42.slice. Jan 16 21:19:01.608075 kubelet[2889]: I0116 21:19:01.606999 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mg9\" (UniqueName: \"kubernetes.io/projected/6d6d0bd4-45d9-4570-ac1c-16981d2fce42-kube-api-access-49mg9\") pod \"tigera-operator-65cdcdfd6d-vd77m\" (UID: \"6d6d0bd4-45d9-4570-ac1c-16981d2fce42\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vd77m" Jan 16 21:19:01.608075 kubelet[2889]: I0116 21:19:01.607044 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d6d0bd4-45d9-4570-ac1c-16981d2fce42-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vd77m\" (UID: \"6d6d0bd4-45d9-4570-ac1c-16981d2fce42\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vd77m" Jan 16 21:19:01.699105 kubelet[2889]: E0116 21:19:01.698021 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:01.705548 containerd[1592]: time="2026-01-16T21:19:01.705137559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx8hg,Uid:5ceb0703-5f29-4104-ae36-51d7b384f72f,Namespace:kube-system,Attempt:0,}" Jan 16 21:19:01.923088 containerd[1592]: time="2026-01-16T21:19:01.922678622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vd77m,Uid:6d6d0bd4-45d9-4570-ac1c-16981d2fce42,Namespace:tigera-operator,Attempt:0,}" Jan 16 21:19:01.931068 containerd[1592]: time="2026-01-16T21:19:01.930101192Z" level=info msg="connecting to shim 00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31" 
address="unix:///run/containerd/s/38812994db9cceb502f68621c0f231a3c6f6484ba4b1678474072740fe0a529a" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:02.046189 containerd[1592]: time="2026-01-16T21:19:02.045770770Z" level=info msg="connecting to shim af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d" address="unix:///run/containerd/s/ad8ca06b5cd9b10e7a2a775f5f9c5ec276ae4b5fc6238c28e34f5c4af9762b8d" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:02.051077 kubelet[2889]: E0116 21:19:02.051044 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:02.152027 systemd[1]: Started cri-containerd-00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31.scope - libcontainer container 00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31. Jan 16 21:19:02.234476 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 16 21:19:02.234721 kernel: audit: type=1334 audit(1768598342.225:433): prog-id=133 op=LOAD Jan 16 21:19:02.225000 audit: BPF prog-id=133 op=LOAD Jan 16 21:19:02.230000 audit: BPF prog-id=134 op=LOAD Jan 16 21:19:02.255460 kubelet[2889]: E0116 21:19:02.254746 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:02.230000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.260555 systemd[1]: Started cri-containerd-af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d.scope - libcontainer container af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d. 
Jan 16 21:19:02.303734 kernel: audit: type=1334 audit(1768598342.230:434): prog-id=134 op=LOAD Jan 16 21:19:02.303941 kernel: audit: type=1300 audit(1768598342.230:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.348441 kernel: audit: type=1327 audit(1768598342.230:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.230000 audit: BPF prog-id=134 op=UNLOAD Jan 16 21:19:02.361604 kernel: audit: type=1334 audit(1768598342.230:435): prog-id=134 op=UNLOAD Jan 16 21:19:02.230000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.440493 kernel: audit: type=1300 audit(1768598342.230:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.440619 kernel: audit: type=1327 audit(1768598342.230:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.233000 audit: BPF prog-id=135 op=LOAD Jan 16 21:19:02.505177 kernel: audit: type=1334 audit(1768598342.233:436): prog-id=135 op=LOAD Jan 16 21:19:02.505984 kernel: audit: type=1300 audit(1768598342.233:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.233000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.506210 kubelet[2889]: E0116 21:19:02.474758 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:02.506501 containerd[1592]: time="2026-01-16T21:19:02.470978805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx8hg,Uid:5ceb0703-5f29-4104-ae36-51d7b384f72f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31\"" Jan 16 21:19:02.233000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.571111 containerd[1592]: time="2026-01-16T21:19:02.524108546Z" level=info msg="CreateContainer within sandbox \"00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 21:19:02.233000 audit: BPF prog-id=136 op=LOAD Jan 16 21:19:02.571595 kernel: audit: type=1327 audit(1768598342.233:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.233000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.233000 audit: BPF prog-id=136 op=UNLOAD Jan 16 21:19:02.233000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.233000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.233000 audit: BPF prog-id=135 op=UNLOAD Jan 16 21:19:02.233000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.233000 audit: BPF prog-id=137 op=LOAD Jan 16 21:19:02.233000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2958 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653337646363613064653663313664623337376361333262316336 Jan 16 21:19:02.375000 audit: BPF prog-id=138 op=LOAD Jan 16 21:19:02.395000 audit: BPF prog-id=139 op=LOAD Jan 16 21:19:02.395000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.395000 audit: BPF prog-id=139 op=UNLOAD Jan 16 21:19:02.395000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.396000 audit: BPF prog-id=140 op=LOAD Jan 16 21:19:02.396000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.396000 audit: BPF prog-id=141 op=LOAD Jan 16 21:19:02.396000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.396000 audit: BPF prog-id=141 op=UNLOAD Jan 16 21:19:02.396000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.396000 audit: BPF prog-id=140 op=UNLOAD Jan 16 21:19:02.396000 audit[3002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.397000 audit: BPF prog-id=142 op=LOAD Jan 16 21:19:02.397000 audit[3002]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2979 pid=3002 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:02.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323362363431346666666132376231643138316263356162363631 Jan 16 21:19:02.658002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232327570.mount: Deactivated successfully. Jan 16 21:19:02.678449 containerd[1592]: time="2026-01-16T21:19:02.677538206Z" level=info msg="Container adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:02.737567 containerd[1592]: time="2026-01-16T21:19:02.734010078Z" level=info msg="CreateContainer within sandbox \"00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee\"" Jan 16 21:19:02.741144 containerd[1592]: time="2026-01-16T21:19:02.740084227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vd77m,Uid:6d6d0bd4-45d9-4570-ac1c-16981d2fce42,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d\"" Jan 16 21:19:02.741144 containerd[1592]: time="2026-01-16T21:19:02.740246597Z" level=info msg="StartContainer for \"adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee\"" Jan 16 21:19:02.745113 containerd[1592]: time="2026-01-16T21:19:02.743091964Z" level=info msg="connecting to shim adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee" address="unix:///run/containerd/s/38812994db9cceb502f68621c0f231a3c6f6484ba4b1678474072740fe0a529a" protocol=ttrpc version=3 Jan 16 21:19:02.779218 
containerd[1592]: time="2026-01-16T21:19:02.773586094Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 21:19:02.881185 systemd[1]: Started cri-containerd-adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee.scope - libcontainer container adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee. Jan 16 21:19:03.130000 audit: BPF prog-id=143 op=LOAD Jan 16 21:19:03.130000 audit[3040]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2958 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:03.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164666331613832653336356466303863633635646364323863333733 Jan 16 21:19:03.131000 audit: BPF prog-id=144 op=LOAD Jan 16 21:19:03.131000 audit[3040]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2958 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:03.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164666331613832653336356466303863633635646364323863333733 Jan 16 21:19:03.131000 audit: BPF prog-id=144 op=UNLOAD Jan 16 21:19:03.131000 audit[3040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2958 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:03.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164666331613832653336356466303863633635646364323863333733 Jan 16 21:19:03.131000 audit: BPF prog-id=143 op=UNLOAD Jan 16 21:19:03.131000 audit[3040]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2958 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:03.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164666331613832653336356466303863633635646364323863333733 Jan 16 21:19:03.131000 audit: BPF prog-id=145 op=LOAD Jan 16 21:19:03.131000 audit[3040]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2958 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:03.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164666331613832653336356466303863633635646364323863333733 Jan 16 21:19:03.319209 containerd[1592]: time="2026-01-16T21:19:03.319012448Z" level=info msg="StartContainer for \"adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee\" returns successfully" Jan 16 21:19:04.266000 audit[3113]: NETFILTER_CFG 
table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:04.266000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff60f11280 a2=0 a3=7fff60f1126c items=0 ppid=3054 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:19:04.279000 audit[3112]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.279000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd5613990 a2=0 a3=7ffcd561397c items=0 ppid=3054 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:19:04.297000 audit[3116]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.297000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2b70c20 a2=0 a3=7ffdc2b70c0c items=0 ppid=3054 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:19:04.335141 kubelet[2889]: E0116 21:19:04.332210 2889 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:04.339000 audit[3120]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.339000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc72682cc0 a2=0 a3=7ffc72682cac items=0 ppid=3054 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:19:04.346000 audit[3114]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:04.346000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0ce06bb0 a2=0 a3=7fff0ce06b9c items=0 ppid=3054 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:19:04.400000 audit[3121]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.400000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe181d5230 a2=0 a3=7ffe181d521c items=0 ppid=3054 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 21:19:04.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:19:04.403000 audit[3122]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:04.403000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8b483d30 a2=0 a3=7ffe8b483d1c items=0 ppid=3054 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:19:04.433000 audit[3124]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.433000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe68d78480 a2=0 a3=7ffe68d7846c items=0 ppid=3054 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 16 21:19:04.504085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019445254.mount: Deactivated successfully. 
Jan 16 21:19:04.513000 audit[3127]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.513000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffff536bba0 a2=0 a3=7ffff536bb8c items=0 ppid=3054 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 16 21:19:04.528000 audit[3128]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.528000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff344fe8d0 a2=0 a3=7fff344fe8bc items=0 ppid=3054 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:19:04.562000 audit[3130]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.562000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1e59c690 a2=0 a3=7fff1e59c67c items=0 ppid=3054 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 16 21:19:04.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:19:04.570000 audit[3131]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.570000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc36a1cb60 a2=0 a3=7ffc36a1cb4c items=0 ppid=3054 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:19:04.610000 audit[3133]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.610000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5ba156f0 a2=0 a3=7ffd5ba156dc items=0 ppid=3054 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:04.651000 audit[3136]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.651000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 
a0=3 a1=7ffceed05490 a2=0 a3=7ffceed0547c items=0 ppid=3054 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:04.658000 audit[3137]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.658000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf7bf6df0 a2=0 a3=7ffcf7bf6ddc items=0 ppid=3054 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:19:04.684000 audit[3139]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.684000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe086ab800 a2=0 a3=7ffe086ab7ec items=0 ppid=3054 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 
21:19:04.699000 audit[3140]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.699000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff8e702d0 a2=0 a3=7ffff8e702bc items=0 ppid=3054 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:19:04.729000 audit[3142]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.729000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8c39edf0 a2=0 a3=7fff8c39eddc items=0 ppid=3054 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 16 21:19:04.772000 audit[3145]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.772000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffaa15ab00 a2=0 a3=7fffaa15aaec items=0 ppid=3054 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:19:04.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 16 21:19:04.835000 audit[3148]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.835000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbffc06e0 a2=0 a3=7ffcbffc06cc items=0 ppid=3054 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 16 21:19:04.853000 audit[3149]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.853000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbc9f48d0 a2=0 a3=7fffbc9f48bc items=0 ppid=3054 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:19:04.899000 audit[3151]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.899000 audit[3151]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff4495d2e0 a2=0 a3=7fff4495d2cc items=0 ppid=3054 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:04.963000 audit[3154]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.963000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda2005cc0 a2=0 a3=7ffda2005cac items=0 ppid=3054 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:04.976000 audit[3155]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:04.976000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd858bc540 a2=0 a3=7ffd858bc52c items=0 ppid=3054 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:04.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:19:05.003000 audit[3157]: 
NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:19:05.003000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcc8f85fa0 a2=0 a3=7ffcc8f85f8c items=0 ppid=3054 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:19:05.175000 audit[3163]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:05.175000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe773dd380 a2=0 a3=7ffe773dd36c items=0 ppid=3054 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:05.221000 audit[3163]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:05.221000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe773dd380 a2=0 a3=7ffe773dd36c items=0 ppid=3054 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.221000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:05.230000 audit[3168]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.230000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff1b5dfd80 a2=0 a3=7fff1b5dfd6c items=0 ppid=3054 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:19:05.257000 audit[3170]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.257000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffee0bd50c0 a2=0 a3=7ffee0bd50ac items=0 ppid=3054 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 16 21:19:05.290000 audit[3173]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.290000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe676ab870 a2=0 a3=7ffe676ab85c items=0 ppid=3054 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 16 21:19:05.318000 audit[3174]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.318000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9a097170 a2=0 a3=7ffd9a09715c items=0 ppid=3054 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:19:05.343144 kubelet[2889]: E0116 21:19:05.341729 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:05.359000 audit[3176]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.359000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeeccc9840 a2=0 a3=7ffeeccc982c items=0 ppid=3054 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.359000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:19:05.396000 audit[3177]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.396000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd531ddaa0 a2=0 a3=7ffd531dda8c items=0 ppid=3054 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:19:05.422000 audit[3179]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.422000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdc4b9260 a2=0 a3=7ffcdc4b924c items=0 ppid=3054 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:05.475000 audit[3182]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.475000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeca3d2630 a2=0 
a3=7ffeca3d261c items=0 ppid=3054 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:05.526000 audit[3183]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.526000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0705f7c0 a2=0 a3=7fff0705f7ac items=0 ppid=3054 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:19:05.580000 audit[3185]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.580000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff90ae06b0 a2=0 a3=7fff90ae069c items=0 ppid=3054 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:19:05.596000 
audit[3186]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.596000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe95468ac0 a2=0 a3=7ffe95468aac items=0 ppid=3054 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:19:05.660000 audit[3188]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.660000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee713ea60 a2=0 a3=7ffee713ea4c items=0 ppid=3054 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 16 21:19:05.745000 audit[3191]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.745000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2b94f730 a2=0 a3=7ffe2b94f71c items=0 ppid=3054 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 21:19:05.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 16 21:19:05.837000 audit[3194]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.837000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc847be80 a2=0 a3=7ffdc847be6c items=0 ppid=3054 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 16 21:19:05.873000 audit[3195]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.873000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcb9f27830 a2=0 a3=7ffcb9f2781c items=0 ppid=3054 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:19:05.908000 audit[3197]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.908000 audit[3197]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffccc876db0 a2=0 a3=7ffccc876d9c items=0 ppid=3054 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:05.977000 audit[3200]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:05.977000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff01ec96d0 a2=0 a3=7fff01ec96bc items=0 ppid=3054 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:05.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:19:06.007034 kubelet[2889]: E0116 21:19:06.005126 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:06.005000 audit[3201]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:06.005000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd49b2d830 a2=0 a3=7ffd49b2d81c items=0 ppid=3054 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:19:06.082000 audit[3207]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:06.082000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc51673f0 a2=0 a3=7ffcc51673dc items=0 ppid=3054 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:19:06.096000 audit[3208]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:06.096000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf99b67a0 a2=0 a3=7ffdf99b678c items=0 ppid=3054 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:19:06.141000 audit[3210]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:06.141000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe410ed2e0 a2=0 a3=7ffe410ed2cc items=0 ppid=3054 pid=3210 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:19:06.171096 kubelet[2889]: I0116 21:19:06.171020 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cx8hg" podStartSLOduration=5.170993931 podStartE2EDuration="5.170993931s" podCreationTimestamp="2026-01-16 21:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:19:04.461636896 +0000 UTC m=+9.019939867" watchObservedRunningTime="2026-01-16 21:19:06.170993931 +0000 UTC m=+10.729297073" Jan 16 21:19:06.177000 audit[3213]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:19:06.177000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2818b620 a2=0 a3=7ffd2818b60c items=0 ppid=3054 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:19:06.258000 audit[3215]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:19:06.258000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff81ef5cb0 a2=0 a3=7fff81ef5c9c items=0 ppid=3054 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.258000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.264000 audit[3215]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:19:06.264000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff81ef5cb0 a2=0 a3=7fff81ef5c9c items=0 ppid=3054 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:06.264000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:06.348552 kubelet[2889]: E0116 21:19:06.347250 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:06.720576 kubelet[2889]: E0116 21:19:06.720485 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:07.352205 kubelet[2889]: E0116 21:19:07.351695 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:12.404543 containerd[1592]: time="2026-01-16T21:19:12.400473275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:12.425050 containerd[1592]: time="2026-01-16T21:19:12.424012847Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active 
requests=0, bytes read=23558205" Jan 16 21:19:12.431456 containerd[1592]: time="2026-01-16T21:19:12.431217021Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:12.441561 containerd[1592]: time="2026-01-16T21:19:12.441521622Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:19:12.443434 containerd[1592]: time="2026-01-16T21:19:12.441968235Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 9.668042411s" Jan 16 21:19:12.443434 containerd[1592]: time="2026-01-16T21:19:12.442090061Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 16 21:19:12.484804 containerd[1592]: time="2026-01-16T21:19:12.484075044Z" level=info msg="CreateContainer within sandbox \"af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 21:19:12.610533 containerd[1592]: time="2026-01-16T21:19:12.610093361Z" level=info msg="Container 8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:19:12.729122 containerd[1592]: time="2026-01-16T21:19:12.728563451Z" level=info msg="CreateContainer within sandbox \"af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f\"" Jan 16 21:19:12.735558 containerd[1592]: time="2026-01-16T21:19:12.734951698Z" level=info msg="StartContainer for \"8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f\"" Jan 16 21:19:12.741621 containerd[1592]: time="2026-01-16T21:19:12.737231545Z" level=info msg="connecting to shim 8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f" address="unix:///run/containerd/s/ad8ca06b5cd9b10e7a2a775f5f9c5ec276ae4b5fc6238c28e34f5c4af9762b8d" protocol=ttrpc version=3 Jan 16 21:19:12.918251 systemd[1]: Started cri-containerd-8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f.scope - libcontainer container 8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f. Jan 16 21:19:13.052910 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 16 21:19:13.071556 kernel: audit: type=1334 audit(1768598353.041:505): prog-id=146 op=LOAD Jan 16 21:19:13.041000 audit: BPF prog-id=146 op=LOAD Jan 16 21:19:13.085599 kernel: audit: type=1334 audit(1768598353.043:506): prog-id=147 op=LOAD Jan 16 21:19:13.043000 audit: BPF prog-id=147 op=LOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112238 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.203961 kernel: audit: type=1300 audit(1768598353.043:506): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112238 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.205209 kernel: audit: type=1327 audit(1768598353.043:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.224594 kernel: audit: type=1334 audit(1768598353.043:507): prog-id=147 op=UNLOAD Jan 16 21:19:13.043000 audit: BPF prog-id=147 op=UNLOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.279925 kernel: audit: type=1300 audit(1768598353.043:507): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.338589 kernel: audit: type=1327 audit(1768598353.043:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.373447 kernel: audit: type=1334 
audit(1768598353.043:508): prog-id=148 op=LOAD Jan 16 21:19:13.043000 audit: BPF prog-id=148 op=LOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.431858 kernel: audit: type=1300 audit(1768598353.043:508): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.432666 kernel: audit: type=1327 audit(1768598353.043:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: BPF prog-id=149 op=LOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.043000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: BPF prog-id=149 op=UNLOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: BPF prog-id=148 op=UNLOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.043000 audit: BPF prog-id=150 op=LOAD Jan 16 21:19:13.043000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001126e8 a2=98 a3=0 items=0 ppid=2979 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:19:13.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865306334363238393862656436346235323962303466323738646432 Jan 16 21:19:13.648812 containerd[1592]: time="2026-01-16T21:19:13.647545864Z" level=info msg="StartContainer for \"8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f\" returns successfully" Jan 16 21:19:14.876892 kubelet[2889]: I0116 21:19:14.875899 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vd77m" podStartSLOduration=4.198740193 podStartE2EDuration="13.875872585s" podCreationTimestamp="2026-01-16 21:19:01 +0000 UTC" firstStartedPulling="2026-01-16 21:19:02.771464545 +0000 UTC m=+7.329767506" lastFinishedPulling="2026-01-16 21:19:12.448596936 +0000 UTC m=+17.006899898" observedRunningTime="2026-01-16 21:19:14.8705533 +0000 UTC m=+19.428856481" watchObservedRunningTime="2026-01-16 21:19:14.875872585 +0000 UTC m=+19.434175556" Jan 16 21:19:33.669974 sudo[1815]: pam_unix(sudo:session): session closed for user root Jan 16 21:19:33.669000 audit[1815]: USER_END pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:33.696781 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 16 21:19:33.696945 kernel: audit: type=1106 audit(1768598373.669:513): pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:33.673000 audit[1815]: CRED_DISP pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:33.782700 kernel: audit: type=1104 audit(1768598373.673:514): pid=1815 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:19:33.782803 sshd[1814]: Connection closed by 10.0.0.1 port 40760 Jan 16 21:19:33.781234 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 16 21:19:33.795000 audit[1810]: USER_END pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:19:33.816527 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:40760.service: Deactivated successfully. Jan 16 21:19:33.869167 kernel: audit: type=1106 audit(1768598373.795:515): pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:19:33.874741 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 21:19:33.875786 systemd[1]: session-8.scope: Consumed 14.138s CPU time, 223.4M memory peak. 
Jan 16 21:19:33.800000 audit[1810]: CRED_DISP pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:19:33.896166 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Jan 16 21:19:33.908212 systemd-logind[1570]: Removed session 8. Jan 16 21:19:33.965520 kernel: audit: type=1104 audit(1768598373.800:516): pid=1810 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:19:33.969218 kernel: audit: type=1131 audit(1768598373.819:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:40760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:33.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.34:22-10.0.0.1:40760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:35.168000 audit[3311]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:35.200465 kernel: audit: type=1325 audit(1768598375.168:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:35.168000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedb4ab3d0 a2=0 a3=7ffedb4ab3bc items=0 ppid=3054 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:35.264023 kernel: audit: type=1300 audit(1768598375.168:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedb4ab3d0 a2=0 a3=7ffedb4ab3bc items=0 ppid=3054 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:35.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:35.289857 kernel: audit: type=1327 audit(1768598375.168:518): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:35.212000 audit[3311]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:35.316709 kernel: audit: type=1325 audit(1768598375.212:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:35.212000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedb4ab3d0 a2=0 a3=0 items=0 ppid=3054 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:35.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:35.396506 kernel: audit: type=1300 audit(1768598375.212:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedb4ab3d0 a2=0 a3=0 items=0 ppid=3054 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:36.415000 audit[3313]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:36.415000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbbcd1590 a2=0 a3=7ffdbbcd157c items=0 ppid=3054 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:36.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:36.431000 audit[3313]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:36.431000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbbcd1590 a2=0 a3=0 items=0 ppid=3054 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:36.431000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:46.145701 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 21:19:46.145900 kernel: audit: type=1325 audit(1768598386.109:522): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:46.109000 audit[3315]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:46.109000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc421809f0 a2=0 a3=7ffc421809dc items=0 ppid=3054 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:46.246731 kernel: audit: type=1300 audit(1768598386.109:522): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc421809f0 a2=0 a3=7ffc421809dc items=0 ppid=3054 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:46.246874 kernel: audit: type=1327 audit(1768598386.109:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:46.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:46.264000 audit[3315]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:46.292537 kernel: audit: type=1325 audit(1768598386.264:523): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:46.264000 
audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc421809f0 a2=0 a3=0 items=0 ppid=3054 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:46.423228 kernel: audit: type=1300 audit(1768598386.264:523): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc421809f0 a2=0 a3=0 items=0 ppid=3054 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:46.423539 kernel: audit: type=1327 audit(1768598386.264:523): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:46.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:47.579000 audit[3317]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:47.629036 kernel: audit: type=1325 audit(1768598387.579:524): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:47.579000 audit[3317]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9fe904e0 a2=0 a3=7ffd9fe904cc items=0 ppid=3054 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:47.692530 kernel: audit: type=1300 audit(1768598387.579:524): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9fe904e0 a2=0 a3=7ffd9fe904cc items=0 ppid=3054 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:47.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:47.714804 kernel: audit: type=1327 audit(1768598387.579:524): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:47.714908 kernel: audit: type=1325 audit(1768598387.636:525): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:47.636000 audit[3317]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:47.636000 audit[3317]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9fe904e0 a2=0 a3=0 items=0 ppid=3054 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:47.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.437000 audit[3319]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:52.456845 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 16 21:19:52.456977 kernel: audit: type=1325 audit(1768598392.437:526): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:52.437000 audit[3319]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed10eb770 a2=0 a3=7ffed10eb75c items=0 ppid=3054 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:52.530782 kernel: audit: type=1300 audit(1768598392.437:526): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed10eb770 a2=0 a3=7ffed10eb75c items=0 ppid=3054 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:52.552853 kernel: audit: type=1327 audit(1768598392.437:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.553000 audit[3319]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:52.553000 audit[3319]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed10eb770 a2=0 a3=0 items=0 ppid=3054 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:52.647532 kernel: audit: type=1325 audit(1768598392.553:527): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:19:52.647769 kernel: audit: type=1300 audit(1768598392.553:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed10eb770 a2=0 a3=0 items=0 ppid=3054 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:52.553000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.703423 kernel: audit: type=1327 audit(1768598392.553:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:19:52.778767 systemd[1]: Created slice kubepods-besteffort-pode6ad2071_4474_4750_82bb_9aae8f539dd2.slice - libcontainer container kubepods-besteffort-pode6ad2071_4474_4750_82bb_9aae8f539dd2.slice. Jan 16 21:19:52.885905 kubelet[2889]: I0116 21:19:52.885055 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csmj\" (UniqueName: \"kubernetes.io/projected/e6ad2071-4474-4750-82bb-9aae8f539dd2-kube-api-access-2csmj\") pod \"calico-typha-69d7bbfd7b-d4gjx\" (UID: \"e6ad2071-4474-4750-82bb-9aae8f539dd2\") " pod="calico-system/calico-typha-69d7bbfd7b-d4gjx" Jan 16 21:19:52.885905 kubelet[2889]: I0116 21:19:52.885819 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e6ad2071-4474-4750-82bb-9aae8f539dd2-typha-certs\") pod \"calico-typha-69d7bbfd7b-d4gjx\" (UID: \"e6ad2071-4474-4750-82bb-9aae8f539dd2\") " pod="calico-system/calico-typha-69d7bbfd7b-d4gjx" Jan 16 21:19:52.885905 kubelet[2889]: I0116 21:19:52.885857 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6ad2071-4474-4750-82bb-9aae8f539dd2-tigera-ca-bundle\") pod \"calico-typha-69d7bbfd7b-d4gjx\" (UID: \"e6ad2071-4474-4750-82bb-9aae8f539dd2\") " pod="calico-system/calico-typha-69d7bbfd7b-d4gjx" Jan 16 21:19:53.117114 kubelet[2889]: E0116 21:19:53.109096 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:53.140484 containerd[1592]: 
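The PROCTITLE records above carry the process command line as hex, with NUL bytes separating the arguments. A minimal sketch of decoding one of these values (the `decode_proctitle` helper is illustrative, not part of any audit tooling):

```python
# Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

# proctitle value copied verbatim from the audit records above
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))
# → iptables-restore -w 5 --noflush --counters
```

This matches the `comm="iptables-restor"` / `exe="/usr/sbin/xtables-nft-multi"` fields in the same records: kube-proxy is repeatedly invoking `iptables-restore -w 5 --noflush --counters`, which on this nft-backed system lands as `nft_register_rule` operations against the `filter` and `nat` tables.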
time="2026-01-16T21:19:53.137015716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d7bbfd7b-d4gjx,Uid:e6ad2071-4474-4750-82bb-9aae8f539dd2,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:53.317698 systemd[1]: Created slice kubepods-besteffort-pod21b4afca_27f1_4cd7_a28c_a4614d8d099b.slice - libcontainer container kubepods-besteffort-pod21b4afca_27f1_4cd7_a28c_a4614d8d099b.slice. Jan 16 21:19:53.345175 containerd[1592]: time="2026-01-16T21:19:53.342503934Z" level=info msg="connecting to shim d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901" address="unix:///run/containerd/s/5672c3c0cb8791d3f62cdba55308c21a98679adaec53257a237501b400c46281" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:19:53.408134 kubelet[2889]: I0116 21:19:53.407900 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-flexvol-driver-host\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408134 kubelet[2889]: I0116 21:19:53.407957 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-cni-log-dir\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408134 kubelet[2889]: I0116 21:19:53.407986 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-policysync\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408134 kubelet[2889]: I0116 21:19:53.408011 2889 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-var-run-calico\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408134 kubelet[2889]: I0116 21:19:53.408036 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-xtables-lock\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408775 kubelet[2889]: I0116 21:19:53.408057 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slr67\" (UniqueName: \"kubernetes.io/projected/21b4afca-27f1-4cd7-a28c-a4614d8d099b-kube-api-access-slr67\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408775 kubelet[2889]: I0116 21:19:53.408086 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b4afca-27f1-4cd7-a28c-a4614d8d099b-tigera-ca-bundle\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408775 kubelet[2889]: I0116 21:19:53.408112 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-cni-net-dir\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408775 kubelet[2889]: I0116 21:19:53.408136 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/21b4afca-27f1-4cd7-a28c-a4614d8d099b-node-certs\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408775 kubelet[2889]: I0116 21:19:53.408167 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-cni-bin-dir\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408946 kubelet[2889]: I0116 21:19:53.408189 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-lib-modules\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.408946 kubelet[2889]: I0116 21:19:53.408211 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21b4afca-27f1-4cd7-a28c-a4614d8d099b-var-lib-calico\") pod \"calico-node-dmn8q\" (UID: \"21b4afca-27f1-4cd7-a28c-a4614d8d099b\") " pod="calico-system/calico-node-dmn8q" Jan 16 21:19:53.570439 kubelet[2889]: E0116 21:19:53.568940 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.570439 kubelet[2889]: W0116 21:19:53.568978 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.570439 kubelet[2889]: E0116 21:19:53.569683 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.571738 kubelet[2889]: E0116 21:19:53.570994 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.571738 kubelet[2889]: W0116 21:19:53.571021 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.571738 kubelet[2889]: E0116 21:19:53.571044 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.573250 kubelet[2889]: E0116 21:19:53.572745 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.573250 kubelet[2889]: W0116 21:19:53.572864 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.573250 kubelet[2889]: E0116 21:19:53.572883 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.573250 kubelet[2889]: E0116 21:19:53.573187 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.573250 kubelet[2889]: W0116 21:19:53.573198 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.573250 kubelet[2889]: E0116 21:19:53.573213 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.617244 kubelet[2889]: E0116 21:19:53.617092 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.617244 kubelet[2889]: W0116 21:19:53.617208 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.617244 kubelet[2889]: E0116 21:19:53.617236 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.652813 systemd[1]: Started cri-containerd-d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901.scope - libcontainer container d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901. 
Jan 16 21:19:53.657989 kubelet[2889]: E0116 21:19:53.653233 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.657989 kubelet[2889]: W0116 21:19:53.657790 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.657989 kubelet[2889]: E0116 21:19:53.657822 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.657989 kubelet[2889]: E0116 21:19:53.654940 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:19:53.684178 kubelet[2889]: E0116 21:19:53.683866 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:53.691948 containerd[1592]: time="2026-01-16T21:19:53.691905594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmn8q,Uid:21b4afca-27f1-4cd7-a28c-a4614d8d099b,Namespace:calico-system,Attempt:0,}" Jan 16 21:19:53.744233 kubelet[2889]: E0116 21:19:53.742730 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.744233 kubelet[2889]: W0116 21:19:53.742758 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 
21:19:53.744233 kubelet[2889]: E0116 21:19:53.742786 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.748028 kubelet[2889]: E0116 21:19:53.747004 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.748028 kubelet[2889]: W0116 21:19:53.747868 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.748028 kubelet[2889]: E0116 21:19:53.747895 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.752168 kubelet[2889]: E0116 21:19:53.751496 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.752168 kubelet[2889]: W0116 21:19:53.751703 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.752168 kubelet[2889]: E0116 21:19:53.751726 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.756997 kubelet[2889]: E0116 21:19:53.756530 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.756997 kubelet[2889]: W0116 21:19:53.756663 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.756997 kubelet[2889]: E0116 21:19:53.756686 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.764124 kubelet[2889]: E0116 21:19:53.764014 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.764124 kubelet[2889]: W0116 21:19:53.764031 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.764124 kubelet[2889]: E0116 21:19:53.764050 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.765909 kubelet[2889]: E0116 21:19:53.765779 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.765909 kubelet[2889]: W0116 21:19:53.765797 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.765909 kubelet[2889]: E0116 21:19:53.765813 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.768190 kubelet[2889]: E0116 21:19:53.766742 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.768190 kubelet[2889]: W0116 21:19:53.766759 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.768190 kubelet[2889]: E0116 21:19:53.766775 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.772136 kubelet[2889]: E0116 21:19:53.771976 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.772136 kubelet[2889]: W0116 21:19:53.771993 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.772136 kubelet[2889]: E0116 21:19:53.772011 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.779428 kubelet[2889]: E0116 21:19:53.778711 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.779428 kubelet[2889]: W0116 21:19:53.778817 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.779428 kubelet[2889]: E0116 21:19:53.778833 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.785226 kubelet[2889]: E0116 21:19:53.782211 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.785657 kubelet[2889]: W0116 21:19:53.785514 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.785657 kubelet[2889]: E0116 21:19:53.785534 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.787918 kubelet[2889]: E0116 21:19:53.787166 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.789440 kubelet[2889]: W0116 21:19:53.788533 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.789440 kubelet[2889]: E0116 21:19:53.788760 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.793163 kubelet[2889]: E0116 21:19:53.792724 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.793163 kubelet[2889]: W0116 21:19:53.792830 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.793163 kubelet[2889]: E0116 21:19:53.792850 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.795901 kubelet[2889]: E0116 21:19:53.794220 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.795901 kubelet[2889]: W0116 21:19:53.794738 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.795901 kubelet[2889]: E0116 21:19:53.794755 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.804871 kubelet[2889]: E0116 21:19:53.802926 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.804871 kubelet[2889]: W0116 21:19:53.803039 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.804871 kubelet[2889]: E0116 21:19:53.803061 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.805786 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.807492 kubelet[2889]: W0116 21:19:53.805803 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.805819 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.806067 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.807492 kubelet[2889]: W0116 21:19:53.806080 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.806094 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.806801 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:53.807492 kubelet[2889]: W0116 21:19:53.806813 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:53.807492 kubelet[2889]: E0116 21:19:53.806826 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 16 21:19:53.815668 kubelet[2889]: E0116 21:19:53.814499 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.817098 kubelet[2889]: W0116 21:19:53.815973 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.817098 kubelet[2889]: E0116 21:19:53.816001 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.826895 kubelet[2889]: E0116 21:19:53.826660 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.826895 kubelet[2889]: W0116 21:19:53.826790 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.826895 kubelet[2889]: E0116 21:19:53.826814 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.828198 kubelet[2889]: E0116 21:19:53.828086 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.828198 kubelet[2889]: W0116 21:19:53.828097 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.828198 kubelet[2889]: E0116 21:19:53.828110 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.840460 kubelet[2889]: E0116 21:19:53.838906 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.840460 kubelet[2889]: W0116 21:19:53.838931 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.840460 kubelet[2889]: E0116 21:19:53.838951 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.840460 kubelet[2889]: I0116 21:19:53.838988 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b99000d7-a136-4299-82d0-76fa7e3c28f2-registration-dir\") pod \"csi-node-driver-6hngd\" (UID: \"b99000d7-a136-4299-82d0-76fa7e3c28f2\") " pod="calico-system/csi-node-driver-6hngd"
Jan 16 21:19:53.841000 kubelet[2889]: E0116 21:19:53.840979 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.841099 kubelet[2889]: W0116 21:19:53.841080 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.841932 kubelet[2889]: E0116 21:19:53.841172 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.842051 kubelet[2889]: I0116 21:19:53.842028 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ls54\" (UniqueName: \"kubernetes.io/projected/b99000d7-a136-4299-82d0-76fa7e3c28f2-kube-api-access-6ls54\") pod \"csi-node-driver-6hngd\" (UID: \"b99000d7-a136-4299-82d0-76fa7e3c28f2\") " pod="calico-system/csi-node-driver-6hngd"
Jan 16 21:19:53.849828 kubelet[2889]: E0116 21:19:53.849012 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.849828 kubelet[2889]: W0116 21:19:53.849044 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.849828 kubelet[2889]: E0116 21:19:53.849075 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.849828 kubelet[2889]: I0116 21:19:53.849110 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b99000d7-a136-4299-82d0-76fa7e3c28f2-socket-dir\") pod \"csi-node-driver-6hngd\" (UID: \"b99000d7-a136-4299-82d0-76fa7e3c28f2\") " pod="calico-system/csi-node-driver-6hngd"
Jan 16 21:19:53.857714 kubelet[2889]: E0116 21:19:53.857685 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.858032 kubelet[2889]: W0116 21:19:53.858007 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.858173 kubelet[2889]: E0116 21:19:53.858154 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.862860 kubelet[2889]: I0116 21:19:53.862764 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b99000d7-a136-4299-82d0-76fa7e3c28f2-varrun\") pod \"csi-node-driver-6hngd\" (UID: \"b99000d7-a136-4299-82d0-76fa7e3c28f2\") " pod="calico-system/csi-node-driver-6hngd"
Jan 16 21:19:53.863474 kubelet[2889]: E0116 21:19:53.863456 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.863700 kubelet[2889]: W0116 21:19:53.863677 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.863818 kubelet[2889]: E0116 21:19:53.863802 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.872466 kubelet[2889]: E0116 21:19:53.871881 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.872466 kubelet[2889]: W0116 21:19:53.871900 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.872466 kubelet[2889]: E0116 21:19:53.871922 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.874478 kubelet[2889]: E0116 21:19:53.873111 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.874478 kubelet[2889]: W0116 21:19:53.873127 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.874478 kubelet[2889]: E0116 21:19:53.873144 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.877031 kubelet[2889]: E0116 21:19:53.876496 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.877031 kubelet[2889]: W0116 21:19:53.876512 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.877031 kubelet[2889]: E0116 21:19:53.876529 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.880807 kubelet[2889]: E0116 21:19:53.880508 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.880807 kubelet[2889]: W0116 21:19:53.880734 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.880807 kubelet[2889]: E0116 21:19:53.880768 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.881942 kubelet[2889]: I0116 21:19:53.881715 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b99000d7-a136-4299-82d0-76fa7e3c28f2-kubelet-dir\") pod \"csi-node-driver-6hngd\" (UID: \"b99000d7-a136-4299-82d0-76fa7e3c28f2\") " pod="calico-system/csi-node-driver-6hngd"
Jan 16 21:19:53.882896 kubelet[2889]: E0116 21:19:53.882026 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.882896 kubelet[2889]: W0116 21:19:53.882813 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.882896 kubelet[2889]: E0116 21:19:53.882833 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.886472 kubelet[2889]: E0116 21:19:53.886167 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.886472 kubelet[2889]: W0116 21:19:53.886191 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.886472 kubelet[2889]: E0116 21:19:53.886212 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.889792 kubelet[2889]: E0116 21:19:53.889651 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.889792 kubelet[2889]: W0116 21:19:53.889677 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.889792 kubelet[2889]: E0116 21:19:53.889700 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.895910 kubelet[2889]: E0116 21:19:53.895732 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.895910 kubelet[2889]: W0116 21:19:53.895752 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.895910 kubelet[2889]: E0116 21:19:53.895776 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.897839 kubelet[2889]: E0116 21:19:53.897819 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.897937 kubelet[2889]: W0116 21:19:53.897917 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.898037 kubelet[2889]: E0116 21:19:53.898016 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.899724 kubelet[2889]: E0116 21:19:53.899708 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:53.899814 kubelet[2889]: W0116 21:19:53.899799 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:53.899891 kubelet[2889]: E0116 21:19:53.899877 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.945782 kernel: audit: type=1325 audit(1768598393.920:528): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 16 21:19:53.920000 audit[3417]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 16 21:19:54.010959 kernel: audit: type=1300 audit(1768598393.920:528): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd614c090 a2=0 a3=7ffcd614c07c items=0 ppid=3054 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:53.920000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd614c090 a2=0 a3=7ffcd614c07c items=0 ppid=3054 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.011246 containerd[1592]: time="2026-01-16T21:19:54.009494415Z" level=info msg="connecting to shim 93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092" address="unix:///run/containerd/s/845e5825ed337eaa9272fdac0a609c1128408f580483388cca82c41ba5e13712" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:19:54.034514 kernel: audit: type=1327 audit(1768598393.920:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 16 21:19:53.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.014820 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.034809 kubelet[2889]: W0116 21:19:54.014842 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.014869 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.019654 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.034809 kubelet[2889]: W0116 21:19:54.019671 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.019698 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.023172 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.034809 kubelet[2889]: W0116 21:19:54.023186 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.023204 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.034809 kubelet[2889]: E0116 21:19:54.027753 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.035141 kubelet[2889]: W0116 21:19:54.027769 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.035141 kubelet[2889]: E0116 21:19:54.027788 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.959000 audit[3417]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.036034 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.069775 kubelet[2889]: W0116 21:19:54.036070 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.036098 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.038744 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.069775 kubelet[2889]: W0116 21:19:54.038761 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.038783 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.047865 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.069775 kubelet[2889]: W0116 21:19:54.047890 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.047915 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.069775 kubelet[2889]: E0116 21:19:54.057776 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.070853 kernel: audit: type=1325 audit(1768598393.959:529): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 16 21:19:54.073217 kubelet[2889]: W0116 21:19:54.057800 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.057823 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.061242 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.073217 kubelet[2889]: W0116 21:19:54.063035 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.063068 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.067653 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.073217 kubelet[2889]: W0116 21:19:54.067680 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.067707 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.073217 kubelet[2889]: E0116 21:19:54.070143 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.073217 kubelet[2889]: W0116 21:19:54.070158 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.075954 kubelet[2889]: E0116 21:19:54.070179 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:53.959000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd614c090 a2=0 a3=0 items=0 ppid=3054 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:53.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 16 21:19:54.003000 audit: BPF prog-id=151 op=LOAD
Jan 16 21:19:54.007000 audit: BPF prog-id=152 op=LOAD
Jan 16 21:19:54.007000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.007000 audit: BPF prog-id=152 op=UNLOAD
Jan 16 21:19:54.007000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.009000 audit: BPF prog-id=153 op=LOAD
Jan 16 21:19:54.009000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.009000 audit: BPF prog-id=154 op=LOAD
Jan 16 21:19:54.009000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.009000 audit: BPF prog-id=154 op=UNLOAD
Jan 16 21:19:54.009000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.009000 audit: BPF prog-id=153 op=UNLOAD
Jan 16 21:19:54.009000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.009000 audit: BPF prog-id=155 op=LOAD
Jan 16 21:19:54.009000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3330 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:19:54.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438646461653936343665653333643362336632333034333030643136
Jan 16 21:19:54.101025 kubelet[2889]: E0116 21:19:54.100688 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.101025 kubelet[2889]: W0116 21:19:54.100807 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.101025 kubelet[2889]: E0116 21:19:54.100831 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.103440 kubelet[2889]: E0116 21:19:54.103020 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.103440 kubelet[2889]: W0116 21:19:54.103144 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.103440 kubelet[2889]: E0116 21:19:54.103171 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.108098 kubelet[2889]: E0116 21:19:54.107903 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.108098 kubelet[2889]: W0116 21:19:54.108021 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.108098 kubelet[2889]: E0116 21:19:54.108043 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.115506 kubelet[2889]: E0116 21:19:54.112927 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.115506 kubelet[2889]: W0116 21:19:54.113035 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.115506 kubelet[2889]: E0116 21:19:54.113059 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.117088 kubelet[2889]: E0116 21:19:54.116771 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.117088 kubelet[2889]: W0116 21:19:54.116883 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.117088 kubelet[2889]: E0116 21:19:54.116899 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.120214 kubelet[2889]: E0116 21:19:54.119889 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.120214 kubelet[2889]: W0116 21:19:54.119904 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.120214 kubelet[2889]: E0116 21:19:54.119919 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.123109 kubelet[2889]: E0116 21:19:54.122799 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.123109 kubelet[2889]: W0116 21:19:54.122913 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.123109 kubelet[2889]: E0116 21:19:54.122930 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.126388 kubelet[2889]: E0116 21:19:54.125487 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.126388 kubelet[2889]: W0116 21:19:54.125720 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.126388 kubelet[2889]: E0116 21:19:54.125737 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.131217 kubelet[2889]: E0116 21:19:54.130940 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.131217 kubelet[2889]: W0116 21:19:54.130966 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.131217 kubelet[2889]: E0116 21:19:54.130994 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.132227 kubelet[2889]: E0116 21:19:54.132204 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.132671 kubelet[2889]: W0116 21:19:54.132525 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.135854 kubelet[2889]: E0116 21:19:54.135695 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.138981 kubelet[2889]: E0116 21:19:54.138963 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.145647 kubelet[2889]: W0116 21:19:54.142858 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.145647 kubelet[2889]: E0116 21:19:54.142891 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.145849 kubelet[2889]: E0116 21:19:54.145834 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.145923 kubelet[2889]: W0116 21:19:54.145908 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.145995 kubelet[2889]: E0116 21:19:54.145981 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.146738 kubelet[2889]: E0116 21:19:54.146721 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.151045 kubelet[2889]: W0116 21:19:54.150766 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.151045 kubelet[2889]: E0116 21:19:54.150793 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 21:19:54.151158 kubelet[2889]: E0116 21:19:54.151115 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 21:19:54.151158 kubelet[2889]: W0116 21:19:54.151126 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 21:19:54.151158 kubelet[2889]: E0116 21:19:54.151137 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:19:54.226225 kubelet[2889]: E0116 21:19:54.222767 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:19:54.226225 kubelet[2889]: W0116 21:19:54.222791 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:19:54.226225 kubelet[2889]: E0116 21:19:54.222813 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:19:54.353948 systemd[1]: Started cri-containerd-93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092.scope - libcontainer container 93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092. Jan 16 21:19:54.401775 containerd[1592]: time="2026-01-16T21:19:54.400485172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d7bbfd7b-d4gjx,Uid:e6ad2071-4474-4750-82bb-9aae8f539dd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901\"" Jan 16 21:19:54.406883 kubelet[2889]: E0116 21:19:54.403137 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:54.410971 containerd[1592]: time="2026-01-16T21:19:54.408810181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 21:19:54.517000 audit: BPF prog-id=156 op=LOAD Jan 16 21:19:54.531000 audit: BPF prog-id=157 op=LOAD Jan 16 21:19:54.531000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.531000 audit: BPF prog-id=157 op=UNLOAD Jan 16 21:19:54.531000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.531000 audit: BPF prog-id=158 op=LOAD Jan 16 21:19:54.531000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.531000 audit: BPF prog-id=159 op=LOAD Jan 16 21:19:54.531000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.532000 audit: BPF prog-id=159 op=UNLOAD Jan 16 21:19:54.532000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.532000 audit: BPF prog-id=158 op=UNLOAD Jan 16 21:19:54.532000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.532000 audit: BPF prog-id=160 op=LOAD Jan 16 21:19:54.532000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3426 pid=3464 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:19:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933646661626664663539396633643336313238373531366166333936 Jan 16 21:19:54.826208 containerd[1592]: time="2026-01-16T21:19:54.822528704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmn8q,Uid:21b4afca-27f1-4cd7-a28c-a4614d8d099b,Namespace:calico-system,Attempt:0,} returns sandbox id \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\"" Jan 16 21:19:54.836100 kubelet[2889]: E0116 21:19:54.835735 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:19:55.057182 kubelet[2889]: E0116 21:19:55.057041 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:19:55.738010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182960049.mount: Deactivated successfully. 
Jan 16 21:19:57.075143 kubelet[2889]: E0116 21:19:57.073039 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:19:59.056178 kubelet[2889]: E0116 21:19:59.056015 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:00.918206 containerd[1592]: time="2026-01-16T21:20:00.917727066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:00.921440 containerd[1592]: time="2026-01-16T21:20:00.921401201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 16 21:20:00.925003 containerd[1592]: time="2026-01-16T21:20:00.924938413Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:00.932945 containerd[1592]: time="2026-01-16T21:20:00.932914502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:00.934145 containerd[1592]: time="2026-01-16T21:20:00.933958053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 6.525015988s" Jan 16 21:20:00.934145 containerd[1592]: time="2026-01-16T21:20:00.933993199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 16 21:20:00.940664 containerd[1592]: time="2026-01-16T21:20:00.939064273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 21:20:00.999039 containerd[1592]: time="2026-01-16T21:20:00.996175277Z" level=info msg="CreateContainer within sandbox \"d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 21:20:01.051746 containerd[1592]: time="2026-01-16T21:20:01.050684139Z" level=info msg="Container 938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:20:01.055793 kubelet[2889]: E0116 21:20:01.054971 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:01.109862 containerd[1592]: time="2026-01-16T21:20:01.108506952Z" level=info msg="CreateContainer within sandbox \"d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a\"" Jan 16 21:20:01.114801 containerd[1592]: time="2026-01-16T21:20:01.114091988Z" level=info msg="StartContainer for \"938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a\"" Jan 16 
21:20:01.122762 containerd[1592]: time="2026-01-16T21:20:01.122720194Z" level=info msg="connecting to shim 938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a" address="unix:///run/containerd/s/5672c3c0cb8791d3f62cdba55308c21a98679adaec53257a237501b400c46281" protocol=ttrpc version=3 Jan 16 21:20:01.385827 systemd[1]: Started cri-containerd-938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a.scope - libcontainer container 938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a. Jan 16 21:20:01.496900 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 16 21:20:01.497047 kernel: audit: type=1334 audit(1768598401.471:546): prog-id=161 op=LOAD Jan 16 21:20:01.471000 audit: BPF prog-id=161 op=LOAD Jan 16 21:20:01.507000 audit: BPF prog-id=162 op=LOAD Jan 16 21:20:01.527689 kernel: audit: type=1334 audit(1768598401.507:547): prog-id=162 op=LOAD Jan 16 21:20:01.507000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.630489 kernel: audit: type=1300 audit(1768598401.507:547): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.630739 kernel: audit: type=1327 audit(1768598401.507:547): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.630922 kernel: audit: type=1334 audit(1768598401.515:548): prog-id=162 op=UNLOAD Jan 16 21:20:01.515000 audit: BPF prog-id=162 op=UNLOAD Jan 16 21:20:01.515000 audit[3510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.713784 kernel: audit: type=1300 audit(1768598401.515:548): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.715917 kernel: audit: type=1327 audit(1768598401.515:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.518000 audit: BPF prog-id=163 op=LOAD Jan 16 21:20:01.787877 kernel: audit: type=1334 audit(1768598401.518:549): prog-id=163 op=LOAD Jan 16 21:20:01.787987 kernel: audit: type=1300 audit(1768598401.518:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000128488 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.518000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.859917 containerd[1592]: time="2026-01-16T21:20:01.859785197Z" level=info msg="StartContainer for \"938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a\" returns successfully" Jan 16 21:20:01.894470 kernel: audit: type=1327 audit(1768598401.518:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.519000 audit: BPF prog-id=164 op=LOAD Jan 16 21:20:01.519000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.519000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.519000 audit: BPF prog-id=164 op=UNLOAD Jan 16 21:20:01.519000 audit[3510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.519000 audit: BPF prog-id=163 op=UNLOAD Jan 16 21:20:01.519000 audit[3510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:01.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:01.519000 audit: BPF prog-id=165 op=LOAD Jan 16 21:20:01.519000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3330 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:20:01.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933386630616132646534363239356635626239633432373161303738 Jan 16 21:20:02.223962 containerd[1592]: time="2026-01-16T21:20:02.221959923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:02.230399 containerd[1592]: time="2026-01-16T21:20:02.229953500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 21:20:02.241112 containerd[1592]: time="2026-01-16T21:20:02.240969352Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:02.264513 containerd[1592]: time="2026-01-16T21:20:02.262867253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:02.266919 containerd[1592]: time="2026-01-16T21:20:02.266727759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.32762841s" Jan 16 21:20:02.266919 containerd[1592]: time="2026-01-16T21:20:02.266896634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference 
\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 16 21:20:02.289411 containerd[1592]: time="2026-01-16T21:20:02.289098634Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 21:20:02.354431 containerd[1592]: time="2026-01-16T21:20:02.354118830Z" level=info msg="Container 7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:20:02.360785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626473788.mount: Deactivated successfully. Jan 16 21:20:02.397923 containerd[1592]: time="2026-01-16T21:20:02.397757616Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688\"" Jan 16 21:20:02.407938 containerd[1592]: time="2026-01-16T21:20:02.407132232Z" level=info msg="StartContainer for \"7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688\"" Jan 16 21:20:02.413022 containerd[1592]: time="2026-01-16T21:20:02.411250316Z" level=info msg="connecting to shim 7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688" address="unix:///run/containerd/s/845e5825ed337eaa9272fdac0a609c1128408f580483388cca82c41ba5e13712" protocol=ttrpc version=3 Jan 16 21:20:02.507807 kubelet[2889]: E0116 21:20:02.506526 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:02.577959 kubelet[2889]: E0116 21:20:02.577077 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.577959 kubelet[2889]: W0116 21:20:02.577113 2889 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.577959 kubelet[2889]: E0116 21:20:02.577138 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.582955 kubelet[2889]: E0116 21:20:02.581494 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.582955 kubelet[2889]: W0116 21:20:02.581519 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.582955 kubelet[2889]: E0116 21:20:02.581656 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.583873 kubelet[2889]: E0116 21:20:02.583728 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.583873 kubelet[2889]: W0116 21:20:02.583866 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.583972 kubelet[2889]: E0116 21:20:02.583886 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.584223 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.585479 kubelet[2889]: W0116 21:20:02.584240 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.584489 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.584875 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.585479 kubelet[2889]: W0116 21:20:02.584885 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.584898 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.585136 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.585479 kubelet[2889]: W0116 21:20:02.585149 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.585479 kubelet[2889]: E0116 21:20:02.585161 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.585956 kubelet[2889]: E0116 21:20:02.585824 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.585956 kubelet[2889]: W0116 21:20:02.585835 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.585956 kubelet[2889]: E0116 21:20:02.585848 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.586087 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588461 kubelet[2889]: W0116 21:20:02.586105 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.586116 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.586896 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588461 kubelet[2889]: W0116 21:20:02.586906 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.586914 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.587151 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588461 kubelet[2889]: W0116 21:20:02.587163 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.587175 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.588461 kubelet[2889]: E0116 21:20:02.587808 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588936 kubelet[2889]: W0116 21:20:02.587817 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.587826 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.588024 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588936 kubelet[2889]: W0116 21:20:02.588036 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.588050 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.588463 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588936 kubelet[2889]: W0116 21:20:02.588473 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.588482 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.588936 kubelet[2889]: E0116 21:20:02.588783 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.588936 kubelet[2889]: W0116 21:20:02.588794 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.589229 kubelet[2889]: E0116 21:20:02.588807 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.589229 kubelet[2889]: E0116 21:20:02.589010 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.589229 kubelet[2889]: W0116 21:20:02.589020 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.589229 kubelet[2889]: E0116 21:20:02.589033 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.654785 kubelet[2889]: E0116 21:20:02.654064 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.654785 kubelet[2889]: W0116 21:20:02.654090 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.654785 kubelet[2889]: E0116 21:20:02.654112 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.655115 kubelet[2889]: E0116 21:20:02.654994 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.655115 kubelet[2889]: W0116 21:20:02.655007 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.655115 kubelet[2889]: E0116 21:20:02.655020 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.667249 kubelet[2889]: E0116 21:20:02.662760 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.667249 kubelet[2889]: W0116 21:20:02.662781 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.667249 kubelet[2889]: E0116 21:20:02.662799 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.667249 kubelet[2889]: E0116 21:20:02.665707 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.667249 kubelet[2889]: W0116 21:20:02.665722 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.667249 kubelet[2889]: E0116 21:20:02.665741 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.667190 systemd[1]: Started cri-containerd-7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688.scope - libcontainer container 7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688. 
Jan 16 21:20:02.676901 kubelet[2889]: E0116 21:20:02.676696 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.676901 kubelet[2889]: W0116 21:20:02.676722 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.676901 kubelet[2889]: E0116 21:20:02.676740 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.683021 kubelet[2889]: E0116 21:20:02.682066 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.683021 kubelet[2889]: W0116 21:20:02.682192 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.683021 kubelet[2889]: E0116 21:20:02.682209 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.688848 kubelet[2889]: E0116 21:20:02.687944 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.688848 kubelet[2889]: W0116 21:20:02.688068 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.688848 kubelet[2889]: E0116 21:20:02.688087 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.692680 kubelet[2889]: E0116 21:20:02.691128 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.692680 kubelet[2889]: W0116 21:20:02.692492 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.692680 kubelet[2889]: E0116 21:20:02.692528 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.699183 kubelet[2889]: E0116 21:20:02.699165 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.699651 kubelet[2889]: W0116 21:20:02.699516 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.699750 kubelet[2889]: E0116 21:20:02.699733 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.702776 kubelet[2889]: E0116 21:20:02.702758 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.708685 kubelet[2889]: W0116 21:20:02.708664 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.708888 kubelet[2889]: E0116 21:20:02.708772 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.712798 kubelet[2889]: E0116 21:20:02.712210 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.712798 kubelet[2889]: W0116 21:20:02.712227 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.712798 kubelet[2889]: E0116 21:20:02.712240 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.713219 kubelet[2889]: E0116 21:20:02.713204 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.714050 kubelet[2889]: W0116 21:20:02.713514 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.714050 kubelet[2889]: E0116 21:20:02.713658 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.720929 kubelet[2889]: E0116 21:20:02.720911 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.721062 kubelet[2889]: W0116 21:20:02.721024 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.721062 kubelet[2889]: E0116 21:20:02.721045 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.725008 kubelet[2889]: E0116 21:20:02.724082 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.725008 kubelet[2889]: W0116 21:20:02.724094 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.725008 kubelet[2889]: E0116 21:20:02.724104 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.727711 kubelet[2889]: E0116 21:20:02.727694 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.727807 kubelet[2889]: W0116 21:20:02.727791 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.727880 kubelet[2889]: E0116 21:20:02.727866 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.731872 kubelet[2889]: E0116 21:20:02.731850 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.731964 kubelet[2889]: W0116 21:20:02.731944 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.732050 kubelet[2889]: E0116 21:20:02.732032 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.735165 kubelet[2889]: E0116 21:20:02.734720 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.735165 kubelet[2889]: W0116 21:20:02.734732 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.735165 kubelet[2889]: E0116 21:20:02.734744 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:20:02.738947 kubelet[2889]: E0116 21:20:02.738904 2889 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:20:02.738947 kubelet[2889]: W0116 21:20:02.738917 2889 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:20:02.738947 kubelet[2889]: E0116 21:20:02.738927 2889 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:20:02.903000 audit: BPF prog-id=166 op=LOAD Jan 16 21:20:02.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3426 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:02.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393633383762663964353463333933383337376666303634353762 Jan 16 21:20:02.903000 audit: BPF prog-id=167 op=LOAD Jan 16 21:20:02.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3426 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:02.903000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393633383762663964353463333933383337376666303634353762 Jan 16 21:20:02.903000 audit: BPF prog-id=167 op=UNLOAD Jan 16 21:20:02.903000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:02.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393633383762663964353463333933383337376666303634353762 Jan 16 21:20:02.903000 audit: BPF prog-id=166 op=UNLOAD Jan 16 21:20:02.903000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:02.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393633383762663964353463333933383337376666303634353762 Jan 16 21:20:02.903000 audit: BPF prog-id=168 op=LOAD Jan 16 21:20:02.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3426 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:20:02.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393633383762663964353463333933383337376666303634353762 Jan 16 21:20:03.063737 kubelet[2889]: E0116 21:20:03.056862 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:03.149679 containerd[1592]: time="2026-01-16T21:20:03.148179161Z" level=info msg="StartContainer for \"7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688\" returns successfully" Jan 16 21:20:03.325763 systemd[1]: cri-containerd-7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688.scope: Deactivated successfully. Jan 16 21:20:03.330000 audit: BPF prog-id=168 op=UNLOAD Jan 16 21:20:03.364977 containerd[1592]: time="2026-01-16T21:20:03.364918233Z" level=info msg="received container exit event container_id:\"7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688\" id:\"7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688\" pid:3587 exited_at:{seconds:1768598403 nanos:363227818}" Jan 16 21:20:03.534118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688-rootfs.mount: Deactivated successfully. 
Jan 16 21:20:03.547690 kubelet[2889]: E0116 21:20:03.543477 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:03.552738 kubelet[2889]: E0116 21:20:03.552438 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:03.634507 kubelet[2889]: I0116 21:20:03.632064 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69d7bbfd7b-d4gjx" podStartSLOduration=5.101401981 podStartE2EDuration="11.632042639s" podCreationTimestamp="2026-01-16 21:19:52 +0000 UTC" firstStartedPulling="2026-01-16 21:19:54.408049306 +0000 UTC m=+58.966352267" lastFinishedPulling="2026-01-16 21:20:00.938689964 +0000 UTC m=+65.496992925" observedRunningTime="2026-01-16 21:20:02.646856566 +0000 UTC m=+67.205159537" watchObservedRunningTime="2026-01-16 21:20:03.632042639 +0000 UTC m=+68.190345620" Jan 16 21:20:03.774000 audit[3641]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:20:03.774000 audit[3641]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff66e705b0 a2=0 a3=7fff66e7059c items=0 ppid=3054 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:03.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:20:03.782000 audit[3641]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:20:03.782000 audit[3641]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=6276 a0=3 a1=7fff66e705b0 a2=0 a3=7fff66e7059c items=0 ppid=3054 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:03.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:20:04.561046 kubelet[2889]: E0116 21:20:04.558714 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:04.561046 kubelet[2889]: E0116 21:20:04.559057 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:04.566825 containerd[1592]: time="2026-01-16T21:20:04.566771167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 21:20:05.056720 kubelet[2889]: E0116 21:20:05.055163 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:05.573467 kubelet[2889]: E0116 21:20:05.573203 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:07.055163 kubelet[2889]: E0116 21:20:07.054861 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:09.055700 kubelet[2889]: E0116 21:20:09.054799 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:11.057882 kubelet[2889]: E0116 21:20:11.056962 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:13.059980 kubelet[2889]: E0116 21:20:13.054998 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:15.060212 kubelet[2889]: E0116 21:20:15.060152 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:15.064510 kubelet[2889]: E0116 21:20:15.063209 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:15.942750 containerd[1592]: time="2026-01-16T21:20:15.940964345Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:15.948056 containerd[1592]: time="2026-01-16T21:20:15.948016417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 16 21:20:15.953494 containerd[1592]: time="2026-01-16T21:20:15.953232875Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:15.967520 containerd[1592]: time="2026-01-16T21:20:15.967237938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:15.968943 containerd[1592]: time="2026-01-16T21:20:15.968749395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 11.400630711s" Jan 16 21:20:15.968943 containerd[1592]: time="2026-01-16T21:20:15.968788147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 16 21:20:16.008079 containerd[1592]: time="2026-01-16T21:20:16.006070074Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 21:20:16.059788 containerd[1592]: time="2026-01-16T21:20:16.057768451Z" level=info msg="Container 04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179: CDI devices from CRI Config.CDIDevices: []" Jan 16 
21:20:16.106768 containerd[1592]: time="2026-01-16T21:20:16.106012643Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179\"" Jan 16 21:20:16.109728 containerd[1592]: time="2026-01-16T21:20:16.109693867Z" level=info msg="StartContainer for \"04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179\"" Jan 16 21:20:16.115205 containerd[1592]: time="2026-01-16T21:20:16.115170734Z" level=info msg="connecting to shim 04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179" address="unix:///run/containerd/s/845e5825ed337eaa9272fdac0a609c1128408f580483388cca82c41ba5e13712" protocol=ttrpc version=3 Jan 16 21:20:16.290479 systemd[1]: Started cri-containerd-04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179.scope - libcontainer container 04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179. 
Jan 16 21:20:16.478525 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 16 21:20:16.483933 kernel: audit: type=1334 audit(1768598416.461:562): prog-id=169 op=LOAD Jan 16 21:20:16.461000 audit: BPF prog-id=169 op=LOAD Jan 16 21:20:16.461000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.542024 kernel: audit: type=1300 audit(1768598416.461:562): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.603901 kernel: audit: type=1327 audit(1768598416.461:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.604049 kernel: audit: type=1334 audit(1768598416.463:563): prog-id=170 op=LOAD Jan 16 21:20:16.463000 audit: BPF prog-id=170 op=LOAD Jan 16 21:20:16.463000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.695684 kernel: audit: type=1300 audit(1768598416.463:563): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.760078 kernel: audit: type=1327 audit(1768598416.463:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.760215 kernel: audit: type=1334 audit(1768598416.463:564): prog-id=170 op=UNLOAD Jan 16 21:20:16.463000 audit: BPF prog-id=170 op=UNLOAD Jan 16 21:20:16.463000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.830875 kernel: audit: type=1300 audit(1768598416.463:564): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.831014 kernel: audit: type=1327 audit(1768598416.463:564): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.463000 audit: BPF prog-id=169 op=UNLOAD Jan 16 21:20:16.907962 kernel: audit: type=1334 audit(1768598416.463:565): prog-id=169 op=UNLOAD Jan 16 21:20:16.463000 audit[3652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.463000 audit: BPF prog-id=171 op=LOAD Jan 16 21:20:16.463000 audit[3652]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3426 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:16.463000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034623632346232643762346338343735323936626232663462636631 Jan 16 21:20:16.925905 containerd[1592]: time="2026-01-16T21:20:16.925234604Z" level=info msg="StartContainer for \"04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179\" returns successfully" Jan 16 21:20:17.060146 kubelet[2889]: E0116 21:20:17.056140 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:17.061892 kubelet[2889]: E0116 21:20:17.061862 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:17.788969 kubelet[2889]: E0116 21:20:17.788783 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:18.802131 kubelet[2889]: E0116 21:20:18.801225 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:19.058731 kubelet[2889]: E0116 21:20:19.056160 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:19.058731 kubelet[2889]: E0116 21:20:19.057804 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:20.080753 kubelet[2889]: E0116 21:20:20.076789 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:21.064974 kubelet[2889]: E0116 21:20:21.062992 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:23.055248 kubelet[2889]: E0116 21:20:23.054899 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:24.019231 systemd[1]: cri-containerd-04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179.scope: Deactivated successfully. Jan 16 21:20:24.022158 systemd[1]: cri-containerd-04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179.scope: Consumed 5.311s CPU time, 180.5M memory peak, 3.2M read from disk, 171.3M written to disk. 
Jan 16 21:20:24.027166 containerd[1592]: time="2026-01-16T21:20:24.026943865Z" level=info msg="received container exit event container_id:\"04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179\" id:\"04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179\" pid:3665 exited_at:{seconds:1768598424 nanos:24095648}" Jan 16 21:20:24.027000 audit: BPF prog-id=171 op=UNLOAD Jan 16 21:20:24.041860 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 16 21:20:24.041957 kernel: audit: type=1334 audit(1768598424.027:567): prog-id=171 op=UNLOAD Jan 16 21:20:24.176996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179-rootfs.mount: Deactivated successfully. Jan 16 21:20:24.182698 kubelet[2889]: I0116 21:20:24.180186 2889 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 16 21:20:24.513844 systemd[1]: Created slice kubepods-besteffort-podf80ed623_af1a_45e7_a125_0c7c2229f592.slice - libcontainer container kubepods-besteffort-podf80ed623_af1a_45e7_a125_0c7c2229f592.slice. 
Jan 16 21:20:24.581104 kubelet[2889]: I0116 21:20:24.579970 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0752a7cf-5d05-4249-a029-fa96ed25d7e6-config-volume\") pod \"coredns-66bc5c9577-s5mq8\" (UID: \"0752a7cf-5d05-4249-a029-fa96ed25d7e6\") " pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:24.581104 kubelet[2889]: I0116 21:20:24.580149 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2e5a0-0976-4b7a-ac63-530715dff408-config\") pod \"goldmane-7c778bb748-dgkw9\" (UID: \"91f2e5a0-0976-4b7a-ac63-530715dff408\") " pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:24.581104 kubelet[2889]: I0116 21:20:24.580175 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/91f2e5a0-0976-4b7a-ac63-530715dff408-goldmane-key-pair\") pod \"goldmane-7c778bb748-dgkw9\" (UID: \"91f2e5a0-0976-4b7a-ac63-530715dff408\") " pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:24.581104 kubelet[2889]: I0116 21:20:24.580198 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8v8b\" (UniqueName: \"kubernetes.io/projected/0752a7cf-5d05-4249-a029-fa96ed25d7e6-kube-api-access-m8v8b\") pod \"coredns-66bc5c9577-s5mq8\" (UID: \"0752a7cf-5d05-4249-a029-fa96ed25d7e6\") " pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:24.581104 kubelet[2889]: I0116 21:20:24.580217 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91f2e5a0-0976-4b7a-ac63-530715dff408-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dgkw9\" (UID: \"91f2e5a0-0976-4b7a-ac63-530715dff408\") " 
pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:24.585859 kubelet[2889]: I0116 21:20:24.580240 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z982\" (UniqueName: \"kubernetes.io/projected/ac06912f-e290-4031-a848-1392298fa9de-kube-api-access-2z982\") pod \"calico-apiserver-6cb5984987-nz6bj\" (UID: \"ac06912f-e290-4031-a848-1392298fa9de\") " pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:24.585859 kubelet[2889]: I0116 21:20:24.580776 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f80ed623-af1a-45e7-a125-0c7c2229f592-tigera-ca-bundle\") pod \"calico-kube-controllers-7c7c579b5f-dsr85\" (UID: \"f80ed623-af1a-45e7-a125-0c7c2229f592\") " pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:24.585859 kubelet[2889]: I0116 21:20:24.580811 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac06912f-e290-4031-a848-1392298fa9de-calico-apiserver-certs\") pod \"calico-apiserver-6cb5984987-nz6bj\" (UID: \"ac06912f-e290-4031-a848-1392298fa9de\") " pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:24.585859 kubelet[2889]: I0116 21:20:24.580832 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7vk\" (UniqueName: \"kubernetes.io/projected/91f2e5a0-0976-4b7a-ac63-530715dff408-kube-api-access-4v7vk\") pod \"goldmane-7c778bb748-dgkw9\" (UID: \"91f2e5a0-0976-4b7a-ac63-530715dff408\") " pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:24.585859 kubelet[2889]: I0116 21:20:24.580856 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr7t\" (UniqueName: 
\"kubernetes.io/projected/f80ed623-af1a-45e7-a125-0c7c2229f592-kube-api-access-7lr7t\") pod \"calico-kube-controllers-7c7c579b5f-dsr85\" (UID: \"f80ed623-af1a-45e7-a125-0c7c2229f592\") " pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:24.583237 systemd[1]: Created slice kubepods-burstable-pod0752a7cf_5d05_4249_a029_fa96ed25d7e6.slice - libcontainer container kubepods-burstable-pod0752a7cf_5d05_4249_a029_fa96ed25d7e6.slice. Jan 16 21:20:24.661130 systemd[1]: Created slice kubepods-besteffort-pod91f2e5a0_0976_4b7a_ac63_530715dff408.slice - libcontainer container kubepods-besteffort-pod91f2e5a0_0976_4b7a_ac63_530715dff408.slice. Jan 16 21:20:24.728674 systemd[1]: Created slice kubepods-besteffort-podac06912f_e290_4031_a848_1392298fa9de.slice - libcontainer container kubepods-besteffort-podac06912f_e290_4031_a848_1392298fa9de.slice. Jan 16 21:20:24.787856 kubelet[2889]: I0116 21:20:24.786782 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e12a227-9190-436c-a55f-74274779eb32-calico-apiserver-certs\") pod \"calico-apiserver-6cb5984987-cb6w2\" (UID: \"7e12a227-9190-436c-a55f-74274779eb32\") " pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:24.787856 kubelet[2889]: I0116 21:20:24.786834 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-ca-bundle\") pod \"whisker-7b9cdf6556-bhdzz\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:24.787856 kubelet[2889]: I0116 21:20:24.786855 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/551adc74-4bae-43fc-8b67-656cbc70d543-config-volume\") pod 
\"coredns-66bc5c9577-j4h6j\" (UID: \"551adc74-4bae-43fc-8b67-656cbc70d543\") " pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:24.787856 kubelet[2889]: I0116 21:20:24.786887 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-backend-key-pair\") pod \"whisker-7b9cdf6556-bhdzz\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:24.787856 kubelet[2889]: I0116 21:20:24.786906 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6f7h\" (UniqueName: \"kubernetes.io/projected/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-kube-api-access-h6f7h\") pod \"whisker-7b9cdf6556-bhdzz\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:24.793199 kubelet[2889]: I0116 21:20:24.786924 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sthz\" (UniqueName: \"kubernetes.io/projected/7e12a227-9190-436c-a55f-74274779eb32-kube-api-access-9sthz\") pod \"calico-apiserver-6cb5984987-cb6w2\" (UID: \"7e12a227-9190-436c-a55f-74274779eb32\") " pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:24.793199 kubelet[2889]: I0116 21:20:24.786946 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s258b\" (UniqueName: \"kubernetes.io/projected/551adc74-4bae-43fc-8b67-656cbc70d543-kube-api-access-s258b\") pod \"coredns-66bc5c9577-j4h6j\" (UID: \"551adc74-4bae-43fc-8b67-656cbc70d543\") " pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:24.876529 kubelet[2889]: E0116 21:20:24.875924 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:24.878242 containerd[1592]: time="2026-01-16T21:20:24.878198502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 21:20:24.890939 systemd[1]: Created slice kubepods-besteffort-pod9bfca5e6_567f_4d15_9e56_b6d59ebbc722.slice - libcontainer container kubepods-besteffort-pod9bfca5e6_567f_4d15_9e56_b6d59ebbc722.slice. Jan 16 21:20:24.923171 systemd[1]: Created slice kubepods-besteffort-pod7e12a227_9190_436c_a55f_74274779eb32.slice - libcontainer container kubepods-besteffort-pod7e12a227_9190_436c_a55f_74274779eb32.slice. Jan 16 21:20:24.949776 systemd[1]: Created slice kubepods-burstable-pod551adc74_4bae_43fc_8b67_656cbc70d543.slice - libcontainer container kubepods-burstable-pod551adc74_4bae_43fc_8b67_656cbc70d543.slice. Jan 16 21:20:24.951830 kubelet[2889]: E0116 21:20:24.951802 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:24.953063 containerd[1592]: time="2026-01-16T21:20:24.953028635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,}" Jan 16 21:20:25.022064 containerd[1592]: time="2026-01-16T21:20:25.021964403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:25.073950 containerd[1592]: time="2026-01-16T21:20:25.072809664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:20:25.117744 systemd[1]: Created slice kubepods-besteffort-podb99000d7_a136_4299_82d0_76fa7e3c28f2.slice - libcontainer container kubepods-besteffort-podb99000d7_a136_4299_82d0_76fa7e3c28f2.slice. 
Jan 16 21:20:25.219116 containerd[1592]: time="2026-01-16T21:20:25.217908692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:25.219116 containerd[1592]: time="2026-01-16T21:20:25.218024487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:25.283008 containerd[1592]: time="2026-01-16T21:20:25.279115012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:25.305194 containerd[1592]: time="2026-01-16T21:20:25.304948523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:20:25.323104 kubelet[2889]: E0116 21:20:25.323068 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:25.343508 containerd[1592]: time="2026-01-16T21:20:25.342836900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,}" Jan 16 21:20:26.393972 containerd[1592]: time="2026-01-16T21:20:26.393184278Z" level=error msg="Failed to destroy network for sandbox \"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.417524 systemd[1]: run-netns-cni\x2d3ce50557\x2d97aa\x2d1b25\x2d91e4\x2d6ffd9db08324.mount: Deactivated successfully. 
Jan 16 21:20:26.432442 containerd[1592]: time="2026-01-16T21:20:26.432147545Z" level=error msg="Failed to destroy network for sandbox \"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.442905 systemd[1]: run-netns-cni\x2d2be6f07a\x2dafdd\x2dc51b\x2d321f\x2db6a17f9b8a3f.mount: Deactivated successfully. Jan 16 21:20:26.517119 containerd[1592]: time="2026-01-16T21:20:26.517003947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.522843 kubelet[2889]: E0116 21:20:26.521825 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.522843 kubelet[2889]: E0116 21:20:26.521913 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-6hngd" Jan 16 21:20:26.522843 kubelet[2889]: E0116 21:20:26.521943 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hngd" Jan 16 21:20:26.525880 kubelet[2889]: E0116 21:20:26.522005 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0914cd016c9baf56803e1651407815dc4c263b638436525798643d288307469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:26.612883 containerd[1592]: time="2026-01-16T21:20:26.612109074Z" level=error msg="Failed to destroy network for sandbox \"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.624829 systemd[1]: run-netns-cni\x2dcccb6c25\x2dea47\x2d30ee\x2d1cca\x2dcfe68ded9427.mount: Deactivated successfully. 
Jan 16 21:20:26.709807 containerd[1592]: time="2026-01-16T21:20:26.709745562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.711128 kubelet[2889]: E0116 21:20:26.711085 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.713487 kubelet[2889]: E0116 21:20:26.712511 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:26.713487 kubelet[2889]: E0116 21:20:26.712675 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:26.714066 kubelet[2889]: E0116 21:20:26.714031 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e190b3bfd1376da12b87b2a93a69228fc23e6e84ddb4b9b5a53f52dfc25adf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:20:26.720484 containerd[1592]: time="2026-01-16T21:20:26.718360608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.721020 kubelet[2889]: E0116 21:20:26.720988 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.721675 kubelet[2889]: E0116 21:20:26.721153 2889 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:26.721675 kubelet[2889]: E0116 21:20:26.721186 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:26.721675 kubelet[2889]: E0116 21:20:26.721249 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a43ff5d225be3dc12aeee8e8466db5f161d13316be4595a43f783519987106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:20:26.737821 containerd[1592]: time="2026-01-16T21:20:26.737648339Z" level=error msg="Failed to destroy network for sandbox \"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.748070 systemd[1]: run-netns-cni\x2dabb46f77\x2de417\x2ddcf6\x2d08ca\x2d4ee44d23b83e.mount: Deactivated successfully. Jan 16 21:20:26.784045 containerd[1592]: time="2026-01-16T21:20:26.783027521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.798885 kubelet[2889]: E0116 21:20:26.784470 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.798885 kubelet[2889]: E0116 21:20:26.784523 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:26.798885 kubelet[2889]: E0116 21:20:26.784655 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:26.799078 kubelet[2889]: E0116 21:20:26.784702 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d1e04065b3cda74e63d18c8402d6912ce167206964415de4809eb60cbb0d196\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s5mq8" podUID="0752a7cf-5d05-4249-a029-fa96ed25d7e6" Jan 16 21:20:26.804013 containerd[1592]: time="2026-01-16T21:20:26.801806358Z" level=error msg="Failed to destroy network for sandbox \"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.804013 containerd[1592]: time="2026-01-16T21:20:26.803041280Z" level=error msg="Failed to destroy network for sandbox \"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.811780 containerd[1592]: time="2026-01-16T21:20:26.811727758Z" level=error msg="Failed to destroy network for 
sandbox \"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.818953 containerd[1592]: time="2026-01-16T21:20:26.818912105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.820752 kubelet[2889]: E0116 21:20:26.820093 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.820752 kubelet[2889]: E0116 21:20:26.820249 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:26.820752 kubelet[2889]: E0116 21:20:26.820424 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:26.822245 kubelet[2889]: E0116 21:20:26.820468 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf6b6cfe3f7b5285ff5985a063257aaca53d8a5875020cb4dac6efc91b65eb6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j4h6j" podUID="551adc74-4bae-43fc-8b67-656cbc70d543" Jan 16 21:20:26.822673 containerd[1592]: time="2026-01-16T21:20:26.820918259Z" level=error msg="Failed to destroy network for sandbox \"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.855636 containerd[1592]: time="2026-01-16T21:20:26.853683856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.857816 kubelet[2889]: E0116 21:20:26.856860 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.857816 kubelet[2889]: E0116 21:20:26.857014 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:26.857816 kubelet[2889]: E0116 21:20:26.857035 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:26.858095 kubelet[2889]: E0116 21:20:26.857078 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"eebf07e527d069e0f6555c26b2d4feba5f81d49d245e72899a7b9f4c7f94bacb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9cdf6556-bhdzz" podUID="9bfca5e6-567f-4d15-9e56-b6d59ebbc722" Jan 16 21:20:26.860122 containerd[1592]: time="2026-01-16T21:20:26.860075039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.861630 kubelet[2889]: E0116 21:20:26.861197 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.864089 kubelet[2889]: E0116 21:20:26.862637 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:26.864089 kubelet[2889]: E0116 21:20:26.862661 2889 kuberuntime_manager.go:1343] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:26.864089 kubelet[2889]: E0116 21:20:26.863058 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3b2ad51568554b6957237edb7d39dd1d9fb35bcdd24971e1b76ed63b20ff91f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:20:26.868493 containerd[1592]: time="2026-01-16T21:20:26.868077745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.873161 kubelet[2889]: E0116 21:20:26.871521 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:26.873161 kubelet[2889]: E0116 21:20:26.871761 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:26.873161 kubelet[2889]: E0116 21:20:26.871784 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:26.873705 kubelet[2889]: E0116 21:20:26.871841 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781d039e3b9a7ec951684cc43c67d76e6758222a13ad9a0d871706348ac6e729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:20:27.188897 systemd[1]: run-netns-cni\x2dfedbdc06\x2d1f26\x2d6802\x2d369c\x2db134bf2a812d.mount: Deactivated successfully. Jan 16 21:20:27.192932 systemd[1]: run-netns-cni\x2d964f2ba6\x2dc96d\x2d6e66\x2d9897\x2db344fe012340.mount: Deactivated successfully. Jan 16 21:20:27.197640 systemd[1]: run-netns-cni\x2d49417e84\x2de352\x2d61f1\x2de8a3\x2d7d34bdecabe9.mount: Deactivated successfully. Jan 16 21:20:27.197754 systemd[1]: run-netns-cni\x2d413c158c\x2dbd19\x2df7a2\x2d3151\x2d0d5750cc3460.mount: Deactivated successfully. Jan 16 21:20:38.375961 kubelet[2889]: E0116 21:20:38.367664 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:38.987944 containerd[1592]: time="2026-01-16T21:20:38.971791559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:20:39.288203 containerd[1592]: time="2026-01-16T21:20:39.270899674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,}" Jan 16 21:20:39.318455 kubelet[2889]: E0116 21:20:39.271214 2889 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.211s" Jan 16 21:20:40.517192 containerd[1592]: time="2026-01-16T21:20:40.506108599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:40.590952 containerd[1592]: time="2026-01-16T21:20:40.590112890Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:20:40.611905 containerd[1592]: time="2026-01-16T21:20:40.601747918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:46.384440 kubelet[2889]: E0116 21:20:46.382087 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:46.477048 containerd[1592]: time="2026-01-16T21:20:46.476534564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,}" Jan 16 21:20:46.531547 kubelet[2889]: E0116 21:20:46.528869 2889 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.191s" Jan 16 21:20:46.585456 containerd[1592]: time="2026-01-16T21:20:46.583050852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:46.596093 containerd[1592]: time="2026-01-16T21:20:46.595929576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:47.797164 containerd[1592]: time="2026-01-16T21:20:47.797014448Z" level=error msg="Failed to destroy network for sandbox \"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.813904 systemd[1]: 
run-netns-cni\x2d19c6c043\x2d1230\x2d5eb3\x2dacda\x2d612f80420986.mount: Deactivated successfully. Jan 16 21:20:47.870809 containerd[1592]: time="2026-01-16T21:20:47.870670658Z" level=error msg="Failed to destroy network for sandbox \"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.889417 containerd[1592]: time="2026-01-16T21:20:47.876783535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.886468 systemd[1]: run-netns-cni\x2d4d1c4a36\x2da82a\x2de93e\x2d6393\x2d610f9afb0ee7.mount: Deactivated successfully. 
Jan 16 21:20:47.890547 kubelet[2889]: E0116 21:20:47.879453 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.890547 kubelet[2889]: E0116 21:20:47.879739 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hngd" Jan 16 21:20:47.890547 kubelet[2889]: E0116 21:20:47.879775 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hngd" Jan 16 21:20:47.891100 kubelet[2889]: E0116 21:20:47.879838 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b05b212d74927114d63b9cb9d8cef9fb7433b49dc5c41f6fd7ad801346dffca2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:20:47.912794 containerd[1592]: time="2026-01-16T21:20:47.912464994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.915915 kubelet[2889]: E0116 21:20:47.915449 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.916153 kubelet[2889]: E0116 21:20:47.916129 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:47.916224 kubelet[2889]: E0116 21:20:47.916209 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:20:47.916697 kubelet[2889]: E0116 21:20:47.916669 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e7292d45f20fd18432567172383be9654e6026727c79e80792d162f641d4be4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s5mq8" podUID="0752a7cf-5d05-4249-a029-fa96ed25d7e6" Jan 16 21:20:47.926443 containerd[1592]: time="2026-01-16T21:20:47.923073905Z" level=error msg="Failed to destroy network for sandbox \"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.933185 systemd[1]: run-netns-cni\x2d66833aa5\x2db0ce\x2db9bc\x2da2d3\x2d1c8bd43c1ffc.mount: Deactivated successfully. 
Jan 16 21:20:47.960100 containerd[1592]: time="2026-01-16T21:20:47.960034208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.962967 kubelet[2889]: E0116 21:20:47.962151 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.963719 kubelet[2889]: E0116 21:20:47.963099 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:47.963719 kubelet[2889]: E0116 21:20:47.963232 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:20:47.967764 kubelet[2889]: E0116 21:20:47.967013 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0b90b73e33afd2fc254660dcd8ec063a4558d3eef00a5bfc97db095e63326b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:20:47.973777 containerd[1592]: time="2026-01-16T21:20:47.973175956Z" level=error msg="Failed to destroy network for sandbox \"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:47.979895 systemd[1]: run-netns-cni\x2dfb9bf046\x2d8de8\x2d3c6e\x2d365d\x2dc5c92aec2624.mount: Deactivated successfully. Jan 16 21:20:47.995464 containerd[1592]: time="2026-01-16T21:20:47.995033464Z" level=error msg="Failed to destroy network for sandbox \"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.001148 systemd[1]: run-netns-cni\x2d67fe9264\x2df89b\x2db0f0\x2dcac8\x2d96a0313f73c8.mount: Deactivated successfully. 
Jan 16 21:20:48.013097 containerd[1592]: time="2026-01-16T21:20:48.013041590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.018818 kubelet[2889]: E0116 21:20:48.018117 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.018818 kubelet[2889]: E0116 21:20:48.018191 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:48.018818 kubelet[2889]: E0116 21:20:48.018220 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:20:48.019510 kubelet[2889]: E0116 21:20:48.018466 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a2512c1e1f3ff56858eb63ad2112101c9088926fcddfef942938b8781ee17bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:20:48.019776 containerd[1592]: time="2026-01-16T21:20:48.019249974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.019990 kubelet[2889]: E0116 21:20:48.019699 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.019990 kubelet[2889]: E0116 21:20:48.019737 2889 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:48.019990 kubelet[2889]: E0116 21:20:48.019764 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:20:48.020104 kubelet[2889]: E0116 21:20:48.019959 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddf56e44576cf6af3304d4f632d5321fdf04c74b2f1da67579aabce62626daf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:20:48.029396 containerd[1592]: time="2026-01-16T21:20:48.028752422Z" level=error msg="Failed to destroy network for sandbox \"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.099446 containerd[1592]: time="2026-01-16T21:20:48.096688899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.100222 kubelet[2889]: E0116 21:20:48.097504 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.100222 kubelet[2889]: E0116 21:20:48.097678 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:48.100222 kubelet[2889]: E0116 21:20:48.097712 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:20:48.102734 kubelet[2889]: E0116 21:20:48.097771 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8a7356898c4f9c17fed58d3978fe968e753a2992fedaa0047915b2f07d892e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j4h6j" podUID="551adc74-4bae-43fc-8b67-656cbc70d543" Jan 16 21:20:48.105989 containerd[1592]: time="2026-01-16T21:20:48.105697658Z" level=error msg="Failed to destroy network for sandbox \"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.107366 containerd[1592]: time="2026-01-16T21:20:48.107207698Z" level=error msg="Failed to destroy network for sandbox \"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.148467 containerd[1592]: time="2026-01-16T21:20:48.145159330Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.152829 kubelet[2889]: E0116 21:20:48.149215 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.152829 kubelet[2889]: E0116 21:20:48.150924 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:48.152829 kubelet[2889]: E0116 21:20:48.151023 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:48.153119 kubelet[2889]: E0116 21:20:48.151453 2889 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dda3651363d299611d76fbb651b39c46f212b54cab8d9be3817684b606e537f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:20:48.153803 containerd[1592]: time="2026-01-16T21:20:48.153212358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.154247 kubelet[2889]: E0116 21:20:48.153883 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:48.154247 kubelet[2889]: E0116 21:20:48.153938 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:48.154247 kubelet[2889]: E0116 21:20:48.153963 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:20:48.154736 kubelet[2889]: E0116 21:20:48.154074 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe83102c80fecff3b627524d0c64b995c9d0193f59224c0c5c514e911dca229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9cdf6556-bhdzz" podUID="9bfca5e6-567f-4d15-9e56-b6d59ebbc722" Jan 16 21:20:48.805985 systemd[1]: run-netns-cni\x2d89b6c067\x2d9f78\x2d8b53\x2d11eb\x2d67f5f89d9ea0.mount: Deactivated successfully. Jan 16 21:20:48.807014 systemd[1]: run-netns-cni\x2d632afa3b\x2dc5ef\x2d4581\x2d3bd9\x2d0f79f211761c.mount: Deactivated successfully. Jan 16 21:20:48.807199 systemd[1]: run-netns-cni\x2d4ebdebd2\x2d96bd\x2dfdb7\x2dc2a2\x2ddf4c759d3c6e.mount: Deactivated successfully. 
Jan 16 21:20:59.196192 containerd[1592]: time="2026-01-16T21:20:59.193784010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,}" Jan 16 21:20:59.596667 containerd[1592]: time="2026-01-16T21:20:59.596144667Z" level=error msg="Failed to destroy network for sandbox \"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:59.603095 systemd[1]: run-netns-cni\x2dbabae086\x2d5b1d\x2d0e33\x2d46fa\x2d775d28d67c27.mount: Deactivated successfully. Jan 16 21:20:59.619962 containerd[1592]: time="2026-01-16T21:20:59.619112363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:59.625139 kubelet[2889]: E0116 21:20:59.621150 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:20:59.625139 kubelet[2889]: E0116 21:20:59.624695 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:59.625139 kubelet[2889]: E0116 21:20:59.624729 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgkw9" Jan 16 21:20:59.626807 kubelet[2889]: E0116 21:20:59.626752 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26a2097f25eece58735e802b1afc83d963cd2ab8e98c592f8dd8b9f4d61d5cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:00.078898 kubelet[2889]: E0116 21:21:00.076994 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:00.086107 containerd[1592]: time="2026-01-16T21:21:00.085812616Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,}" Jan 16 21:21:00.097089 containerd[1592]: time="2026-01-16T21:21:00.097037662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:21:00.097205 containerd[1592]: time="2026-01-16T21:21:00.097140294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:00.100820 containerd[1592]: time="2026-01-16T21:21:00.100790110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:00.694250 containerd[1592]: time="2026-01-16T21:21:00.693986596Z" level=error msg="Failed to destroy network for sandbox \"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.703943 systemd[1]: run-netns-cni\x2daed0a10e\x2d71dd\x2d4507\x2d13d9\x2dfefd8db21c0b.mount: Deactivated successfully. 
Jan 16 21:21:00.718405 containerd[1592]: time="2026-01-16T21:21:00.717851736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.721364 kubelet[2889]: E0116 21:21:00.719237 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.721364 kubelet[2889]: E0116 21:21:00.720755 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:21:00.721364 kubelet[2889]: E0116 21:21:00.720788 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" Jan 16 21:21:00.722132 kubelet[2889]: E0116 21:21:00.720868 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"551c33b435ffd6846c3fb767a29eb34b9d38ee693e6e5a532c9e55167fb5a832\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:00.729766 containerd[1592]: time="2026-01-16T21:21:00.729720001Z" level=error msg="Failed to destroy network for sandbox \"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.736536 systemd[1]: run-netns-cni\x2d4edd8995\x2dfa37\x2d8ea4\x2d8bcb\x2d8330f1c99ed3.mount: Deactivated successfully. 
Jan 16 21:21:00.747547 containerd[1592]: time="2026-01-16T21:21:00.747050407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.749708 kubelet[2889]: E0116 21:21:00.749464 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.749815 kubelet[2889]: E0116 21:21:00.749738 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:21:00.749815 kubelet[2889]: E0116 21:21:00.749772 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-j4h6j" Jan 16 21:21:00.749912 kubelet[2889]: E0116 21:21:00.749839 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j4h6j_kube-system(551adc74-4bae-43fc-8b67-656cbc70d543)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a005a60967bf7c601e2a1f344c5aea0953aa63cea6a74c49daad8baad1d1bacd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j4h6j" podUID="551adc74-4bae-43fc-8b67-656cbc70d543" Jan 16 21:21:00.812479 containerd[1592]: time="2026-01-16T21:21:00.810224265Z" level=error msg="Failed to destroy network for sandbox \"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.827190 systemd[1]: run-netns-cni\x2d2fa4b55a\x2d99aa\x2d53b7\x2d4425\x2d73c696fd6a26.mount: Deactivated successfully. 
Jan 16 21:21:00.837776 containerd[1592]: time="2026-01-16T21:21:00.837713294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9cdf6556-bhdzz,Uid:9bfca5e6-567f-4d15-9e56-b6d59ebbc722,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.842551 kubelet[2889]: E0116 21:21:00.841539 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.842551 kubelet[2889]: E0116 21:21:00.841828 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:21:00.842551 kubelet[2889]: E0116 21:21:00.841858 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7b9cdf6556-bhdzz" Jan 16 21:21:00.842972 kubelet[2889]: E0116 21:21:00.841919 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9cdf6556-bhdzz_calico-system(9bfca5e6-567f-4d15-9e56-b6d59ebbc722)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da52a4f8522be74fc6fb097044d91934d49b0979e9bf0277193c107d0dafe455\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9cdf6556-bhdzz" podUID="9bfca5e6-567f-4d15-9e56-b6d59ebbc722" Jan 16 21:21:00.932558 containerd[1592]: time="2026-01-16T21:21:00.932499701Z" level=error msg="Failed to destroy network for sandbox \"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.940201 systemd[1]: run-netns-cni\x2dc02c6e2b\x2dc431\x2d6af5\x2d16d5\x2dcd21ceac68a1.mount: Deactivated successfully. 
Jan 16 21:21:00.962175 containerd[1592]: time="2026-01-16T21:21:00.952444808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.968227 kubelet[2889]: E0116 21:21:00.965034 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:00.968227 kubelet[2889]: E0116 21:21:00.965114 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:21:00.968227 kubelet[2889]: E0116 21:21:00.965143 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" Jan 16 21:21:00.968821 kubelet[2889]: E0116 21:21:00.965210 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8661900e10482a22ddd61875bd33b94e4edc12b64b23450d78ec70426e70762c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:02.133676 containerd[1592]: time="2026-01-16T21:21:02.131068687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:21:02.139158 kubelet[2889]: E0116 21:21:02.139104 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:02.146065 containerd[1592]: time="2026-01-16T21:21:02.143229047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,}" Jan 16 21:21:02.704075 containerd[1592]: time="2026-01-16T21:21:02.702107676Z" level=error msg="Failed to destroy network for sandbox \"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 16 21:21:02.708051 containerd[1592]: time="2026-01-16T21:21:02.707050391Z" level=error msg="Failed to destroy network for sandbox \"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:02.709691 systemd[1]: run-netns-cni\x2da4ae44bb\x2d874d\x2d1425\x2de40d\x2dcb4198612e3d.mount: Deactivated successfully. Jan 16 21:21:02.718106 systemd[1]: run-netns-cni\x2d1f9e95a3\x2d26b0\x2df899\x2d407b\x2d293c2fe1d572.mount: Deactivated successfully. Jan 16 21:21:02.728440 containerd[1592]: time="2026-01-16T21:21:02.726076875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:02.729220 kubelet[2889]: E0116 21:21:02.726842 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:02.729220 kubelet[2889]: E0116 21:21:02.726987 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:21:02.729220 kubelet[2889]: E0116 21:21:02.727011 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s5mq8" Jan 16 21:21:02.731178 kubelet[2889]: E0116 21:21:02.727174 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s5mq8_kube-system(0752a7cf-5d05-4249-a029-fa96ed25d7e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"046ada5a562d9f25a79c6d26bcc3a488d512dcf867b9097542bb5f5606a89f42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s5mq8" podUID="0752a7cf-5d05-4249-a029-fa96ed25d7e6" Jan 16 21:21:02.731178 kubelet[2889]: E0116 21:21:02.730557 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:02.731178 kubelet[2889]: E0116 
21:21:02.730740 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:21:02.732514 containerd[1592]: time="2026-01-16T21:21:02.729475981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:02.733444 kubelet[2889]: E0116 21:21:02.730763 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" Jan 16 21:21:02.733444 kubelet[2889]: E0116 21:21:02.730827 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"4988d18d490eedb93f3600f1fb68d65f69f02b92463fea88229bf32fd80de248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:03.073486 containerd[1592]: time="2026-01-16T21:21:03.071224729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:03.381673 containerd[1592]: time="2026-01-16T21:21:03.381016295Z" level=error msg="Failed to destroy network for sandbox \"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:03.387222 systemd[1]: run-netns-cni\x2d5e2b9850\x2da7a1\x2d23c4\x2d8381\x2d37984f93efaf.mount: Deactivated successfully. 
Jan 16 21:21:03.402731 containerd[1592]: time="2026-01-16T21:21:03.401949245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:03.409671 kubelet[2889]: E0116 21:21:03.406060 2889 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:21:03.409671 kubelet[2889]: E0116 21:21:03.406988 2889 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hngd" Jan 16 21:21:03.409671 kubelet[2889]: E0116 21:21:03.407151 2889 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hngd" 
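The repeated sandbox failures above all trace to the same missing file: `stat /var/lib/calico/nodename: no such file or directory`. That file is written by the calico/node container when it starts, and the Calico CNI plugin refuses both add and delete operations until it exists, which is why every pod sandbox (coredns, calico-apiserver, csi-node-driver) fails the same way until calico-node comes up later in the log. A minimal sketch of that gating check (not Calico's actual code; the helper name is illustrative), simulated against a temporary directory so it runs anywhere:

```python
# Illustrative reconstruction of the "nodename" readiness gate behind the
# errors above: calico/node writes its node name to
# /var/lib/calico/nodename on startup, and the CNI plugin stats that file
# before doing any network setup/teardown.
import os
import tempfile
from typing import Optional

def nodename_ready(calico_dir: str) -> Optional[str]:
    """Return the node name if calico/node has written it, else None."""
    path = os.path.join(calico_dir, "nodename")
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

with tempfile.TemporaryDirectory() as d:
    # Before calico/node starts: the CNI plugin would fail, as in the log.
    assert nodename_ready(d) is None
    with open(os.path.join(d, "nodename"), "w") as f:
        f.write("localhost\n")
    # After calico/node has written the file: sandbox setup can proceed.
    assert nodename_ready(d) == "localhost"
```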
Jan 16 21:21:03.413939 kubelet[2889]: E0116 21:21:03.408772 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad293d6b43c205cd63dfee42ea2004c4386edcb92a2fdf3baa5f08b037d1dd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:04.130001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246792699.mount: Deactivated successfully. Jan 16 21:21:04.259462 containerd[1592]: time="2026-01-16T21:21:04.256208504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:04.261493 containerd[1592]: time="2026-01-16T21:21:04.261158008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 16 21:21:04.267119 containerd[1592]: time="2026-01-16T21:21:04.266886773Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:04.274532 containerd[1592]: time="2026-01-16T21:21:04.274413672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:04.282103 containerd[1592]: time="2026-01-16T21:21:04.281125023Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 39.402193034s" Jan 16 21:21:04.282103 containerd[1592]: time="2026-01-16T21:21:04.281514499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 16 21:21:04.360814 containerd[1592]: time="2026-01-16T21:21:04.360191156Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 21:21:04.457914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003634742.mount: Deactivated successfully. Jan 16 21:21:04.459995 containerd[1592]: time="2026-01-16T21:21:04.459889712Z" level=info msg="Container 4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:04.515823 containerd[1592]: time="2026-01-16T21:21:04.515249791Z" level=info msg="CreateContainer within sandbox \"93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866\"" Jan 16 21:21:04.518146 containerd[1592]: time="2026-01-16T21:21:04.517925205Z" level=info msg="StartContainer for \"4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866\"" Jan 16 21:21:04.526525 containerd[1592]: time="2026-01-16T21:21:04.525917671Z" level=info msg="connecting to shim 4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866" address="unix:///run/containerd/s/845e5825ed337eaa9272fdac0a609c1128408f580483388cca82c41ba5e13712" protocol=ttrpc 
version=3 Jan 16 21:21:04.724030 systemd[1]: Started cri-containerd-4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866.scope - libcontainer container 4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866. Jan 16 21:21:04.951000 audit: BPF prog-id=172 op=LOAD Jan 16 21:21:04.951000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:05.024077 kernel: audit: type=1334 audit(1768598464.951:568): prog-id=172 op=LOAD Jan 16 21:21:05.026740 kernel: audit: type=1300 audit(1768598464.951:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:05.026817 kernel: audit: type=1327 audit(1768598464.951:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:04.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:04.953000 audit: BPF prog-id=173 op=LOAD Jan 16 21:21:05.077126 kernel: audit: type=1334 audit(1768598464.953:569): prog-id=173 op=LOAD Jan 16 21:21:05.087551 kernel: audit: type=1300 audit(1768598464.953:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 
a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:04.953000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:04.953000 audit: BPF prog-id=173 op=UNLOAD Jan 16 21:21:05.181095 kernel: audit: type=1327 audit(1768598464.953:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:05.181195 kernel: audit: type=1334 audit(1768598464.953:570): prog-id=173 op=UNLOAD Jan 16 21:21:04.953000 audit[4483]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:05.306192 kernel: audit: type=1300 
audit(1768598464.953:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:05.306532 kernel: audit: type=1327 audit(1768598464.953:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:05.306712 kernel: audit: type=1334 audit(1768598464.953:571): prog-id=172 op=UNLOAD Jan 16 21:21:04.953000 audit: BPF prog-id=172 op=UNLOAD Jan 16 21:21:04.953000 audit[4483]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:04.953000 audit: BPF prog-id=174 op=LOAD Jan 16 21:21:04.953000 audit[4483]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3426 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:04.953000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376161313639343334306563323737393830313731353539383733 Jan 16 21:21:05.359147 containerd[1592]: time="2026-01-16T21:21:05.359020048Z" level=info msg="StartContainer for \"4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866\" returns successfully" Jan 16 21:21:05.641941 kubelet[2889]: E0116 21:21:05.627903 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:05.717556 kubelet[2889]: I0116 21:21:05.717203 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dmn8q" podStartSLOduration=3.275791339 podStartE2EDuration="1m12.717082079s" podCreationTimestamp="2026-01-16 21:19:53 +0000 UTC" firstStartedPulling="2026-01-16 21:19:54.843781355 +0000 UTC m=+59.402084316" lastFinishedPulling="2026-01-16 21:21:04.285072095 +0000 UTC m=+128.843375056" observedRunningTime="2026-01-16 21:21:05.714828622 +0000 UTC m=+130.273131583" watchObservedRunningTime="2026-01-16 21:21:05.717082079 +0000 UTC m=+130.275385040" Jan 16 21:21:06.094054 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 21:21:06.094214 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
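The `PROCTITLE` values in the audit records above are the audited process's argv, hex-encoded with NUL bytes separating the arguments (here they begin `72756E63...`, i.e. the `runc` invocation used to start the calico-node container). A small decoder sketch (the function name is illustrative):

```python
# Decode an audit PROCTITLE field: hex string -> list of argv strings.
# Arguments are separated by NUL (0x00) bytes in the raw value.
def decode_proctitle(hexstr: str) -> list:
    return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

# First bytes of the runc proctitle records logged above:
decode_proctitle("72756E63002D2D726F6F74")
# -> ['runc', '--root'] (a prefix of the full command line)
```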
Jan 16 21:21:06.638525 kubelet[2889]: E0116 21:21:06.636470 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:07.081876 kubelet[2889]: I0116 21:21:07.081728 2889 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6f7h\" (UniqueName: \"kubernetes.io/projected/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-kube-api-access-h6f7h\") pod \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " Jan 16 21:21:07.082706 kubelet[2889]: I0116 21:21:07.081905 2889 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-ca-bundle\") pod \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " Jan 16 21:21:07.082706 kubelet[2889]: I0116 21:21:07.081938 2889 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-backend-key-pair\") pod \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\" (UID: \"9bfca5e6-567f-4d15-9e56-b6d59ebbc722\") " Jan 16 21:21:07.086014 kubelet[2889]: I0116 21:21:07.085092 2889 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9bfca5e6-567f-4d15-9e56-b6d59ebbc722" (UID: "9bfca5e6-567f-4d15-9e56-b6d59ebbc722"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 21:21:07.106526 kubelet[2889]: I0116 21:21:07.105694 2889 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9bfca5e6-567f-4d15-9e56-b6d59ebbc722" (UID: "9bfca5e6-567f-4d15-9e56-b6d59ebbc722"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 21:21:07.106192 systemd[1]: var-lib-kubelet-pods-9bfca5e6\x2d567f\x2d4d15\x2d9e56\x2db6d59ebbc722-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 16 21:21:07.123443 systemd[1]: var-lib-kubelet-pods-9bfca5e6\x2d567f\x2d4d15\x2d9e56\x2db6d59ebbc722-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6f7h.mount: Deactivated successfully. Jan 16 21:21:07.128812 kubelet[2889]: I0116 21:21:07.128067 2889 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-kube-api-access-h6f7h" (OuterVolumeSpecName: "kube-api-access-h6f7h") pod "9bfca5e6-567f-4d15-9e56-b6d59ebbc722" (UID: "9bfca5e6-567f-4d15-9e56-b6d59ebbc722"). InnerVolumeSpecName "kube-api-access-h6f7h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 21:21:07.184189 kubelet[2889]: I0116 21:21:07.183952 2889 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6f7h\" (UniqueName: \"kubernetes.io/projected/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-kube-api-access-h6f7h\") on node \"localhost\" DevicePath \"\"" Jan 16 21:21:07.184189 kubelet[2889]: I0116 21:21:07.184120 2889 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 16 21:21:07.184189 kubelet[2889]: I0116 21:21:07.184130 2889 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfca5e6-567f-4d15-9e56-b6d59ebbc722-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 16 21:21:07.670989 systemd[1]: Removed slice kubepods-besteffort-pod9bfca5e6_567f_4d15_9e56_b6d59ebbc722.slice - libcontainer container kubepods-besteffort-pod9bfca5e6_567f_4d15_9e56_b6d59ebbc722.slice. Jan 16 21:21:08.019733 systemd[1]: Created slice kubepods-besteffort-poda053e2c3_3297_4e1f_bf5a_da2b545cc5db.slice - libcontainer container kubepods-besteffort-poda053e2c3_3297_4e1f_bf5a_da2b545cc5db.slice. 
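The `Removed slice` / `Created slice` messages above show how kubelet's systemd cgroup driver names pod slices: the QoS class and pod UID are embedded in the unit name, with the UID's dashes replaced by underscores because `-` is the slice-hierarchy separator in systemd unit names. A sketch of that mapping (the helper name is illustrative, not kubelet's actual function):

```python
# Reconstruct the slice name kubelet/systemd used for the removed whisker
# pod above, from its pod UID and QoS class. '-' in the UID becomes '_'
# so the UID occupies a single systemd slice-name component.
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

pod_slice_name("9bfca5e6-567f-4d15-9e56-b6d59ebbc722")
# -> 'kubepods-besteffort-pod9bfca5e6_567f_4d15_9e56_b6d59ebbc722.slice'
```

This matches the slice named in the `Removed slice` message for the old whisker pod, and the same pattern appears in the `Created slice` message for its replacement.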
Jan 16 21:21:08.070938 kubelet[2889]: I0116 21:21:08.067241 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfca5e6-567f-4d15-9e56-b6d59ebbc722" path="/var/lib/kubelet/pods/9bfca5e6-567f-4d15-9e56-b6d59ebbc722/volumes" Jan 16 21:21:08.103701 kubelet[2889]: I0116 21:21:08.103426 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wsx\" (UniqueName: \"kubernetes.io/projected/a053e2c3-3297-4e1f-bf5a-da2b545cc5db-kube-api-access-l7wsx\") pod \"whisker-77c5fd8d7d-qxwpz\" (UID: \"a053e2c3-3297-4e1f-bf5a-da2b545cc5db\") " pod="calico-system/whisker-77c5fd8d7d-qxwpz" Jan 16 21:21:08.103701 kubelet[2889]: I0116 21:21:08.103698 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a053e2c3-3297-4e1f-bf5a-da2b545cc5db-whisker-ca-bundle\") pod \"whisker-77c5fd8d7d-qxwpz\" (UID: \"a053e2c3-3297-4e1f-bf5a-da2b545cc5db\") " pod="calico-system/whisker-77c5fd8d7d-qxwpz" Jan 16 21:21:08.104241 kubelet[2889]: I0116 21:21:08.103747 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a053e2c3-3297-4e1f-bf5a-da2b545cc5db-whisker-backend-key-pair\") pod \"whisker-77c5fd8d7d-qxwpz\" (UID: \"a053e2c3-3297-4e1f-bf5a-da2b545cc5db\") " pod="calico-system/whisker-77c5fd8d7d-qxwpz" Jan 16 21:21:08.353714 containerd[1592]: time="2026-01-16T21:21:08.352106476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c5fd8d7d-qxwpz,Uid:a053e2c3-3297-4e1f-bf5a-da2b545cc5db,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:09.421000 audit: BPF prog-id=175 op=LOAD Jan 16 21:21:09.421000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebd1dbdd0 a2=98 a3=1fffffffffffffff items=0 ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.421000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.422000 audit: BPF prog-id=175 op=UNLOAD Jan 16 21:21:09.422000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebd1dbda0 a3=0 items=0 ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.422000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.424000 audit: BPF prog-id=176 op=LOAD Jan 16 21:21:09.424000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebd1dbcb0 a2=94 a3=3 items=0 ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.424000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.425000 audit: BPF prog-id=176 op=UNLOAD Jan 16 21:21:09.425000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebd1dbcb0 a2=94 a3=3 items=0 
ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.425000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.425000 audit: BPF prog-id=177 op=LOAD Jan 16 21:21:09.425000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebd1dbcf0 a2=94 a3=7ffebd1dbed0 items=0 ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.425000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.426000 audit: BPF prog-id=177 op=UNLOAD Jan 16 21:21:09.426000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebd1dbcf0 a2=94 a3=7ffebd1dbed0 items=0 ppid=4613 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.426000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:21:09.441000 audit: BPF prog-id=178 op=LOAD Jan 16 21:21:09.441000 audit[4755]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe550aaf10 a2=98 a3=3 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.441000 audit: BPF prog-id=178 op=UNLOAD Jan 16 21:21:09.441000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe550aaee0 a3=0 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.443000 audit: BPF prog-id=179 op=LOAD Jan 16 21:21:09.443000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe550aad00 a2=94 a3=54428f items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.443000 audit: BPF prog-id=179 op=UNLOAD Jan 16 21:21:09.443000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe550aad00 a2=94 a3=54428f items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.443000 audit: BPF prog-id=180 op=LOAD Jan 16 21:21:09.443000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe550aad30 
a2=94 a3=2 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.443000 audit: BPF prog-id=180 op=UNLOAD Jan 16 21:21:09.443000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe550aad30 a2=0 a3=2 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:09.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:09.780976 systemd-networkd[1517]: calic88124028b4: Link UP Jan 16 21:21:09.793467 systemd-networkd[1517]: calic88124028b4: Gained carrier Jan 16 21:21:09.900830 containerd[1592]: 2026-01-16 21:21:08.552 [INFO][4694] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:21:09.900830 containerd[1592]: 2026-01-16 21:21:08.677 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0 whisker-77c5fd8d7d- calico-system a053e2c3-3297-4e1f-bf5a-da2b545cc5db 1156 0 2026-01-16 21:21:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77c5fd8d7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77c5fd8d7d-qxwpz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic88124028b4 [] [] }} ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-" Jan 16 21:21:09.900830 
containerd[1592]: 2026-01-16 21:21:08.681 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.900830 containerd[1592]: 2026-01-16 21:21:09.392 [INFO][4714] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" HandleID="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Workload="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.398 [INFO][4714] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" HandleID="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Workload="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b6a90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77c5fd8d7d-qxwpz", "timestamp":"2026-01-16 21:21:09.392246681 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.399 [INFO][4714] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.402 [INFO][4714] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.405 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.477 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" host="localhost" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.532 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.583 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.600 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.615 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:09.902098 containerd[1592]: 2026-01-16 21:21:09.615 [INFO][4714] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" host="localhost" Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.625 [INFO][4714] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00 Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.659 [INFO][4714] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" host="localhost" Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.694 [INFO][4714] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" host="localhost" Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.694 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" host="localhost" Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.694 [INFO][4714] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:09.903155 containerd[1592]: 2026-01-16 21:21:09.694 [INFO][4714] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" HandleID="k8s-pod-network.8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Workload="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.905847 containerd[1592]: 2026-01-16 21:21:09.709 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0", GenerateName:"whisker-77c5fd8d7d-", Namespace:"calico-system", SelfLink:"", UID:"a053e2c3-3297-4e1f-bf5a-da2b545cc5db", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c5fd8d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77c5fd8d7d-qxwpz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic88124028b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:09.905847 containerd[1592]: 2026-01-16 21:21:09.711 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.906195 containerd[1592]: 2026-01-16 21:21:09.712 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic88124028b4 ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.906195 containerd[1592]: 2026-01-16 21:21:09.790 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:09.907673 containerd[1592]: 2026-01-16 21:21:09.796 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" 
WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0", GenerateName:"whisker-77c5fd8d7d-", Namespace:"calico-system", SelfLink:"", UID:"a053e2c3-3297-4e1f-bf5a-da2b545cc5db", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c5fd8d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00", Pod:"whisker-77c5fd8d7d-qxwpz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic88124028b4", MAC:"6e:b4:d0:3d:33:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:09.907947 containerd[1592]: 2026-01-16 21:21:09.887 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" Namespace="calico-system" Pod="whisker-77c5fd8d7d-qxwpz" WorkloadEndpoint="localhost-k8s-whisker--77c5fd8d7d--qxwpz-eth0" Jan 16 21:21:10.005000 audit: BPF prog-id=181 op=LOAD Jan 16 21:21:10.022835 kernel: kauditd_printk_skb: 41 callbacks 
suppressed Jan 16 21:21:10.022972 kernel: audit: type=1334 audit(1768598470.005:585): prog-id=181 op=LOAD Jan 16 21:21:10.005000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe550aabf0 a2=94 a3=1 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.059881 kernel: audit: type=1300 audit(1768598470.005:585): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe550aabf0 a2=94 a3=1 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.005000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.089497 kernel: audit: type=1327 audit(1768598470.005:585): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.005000 audit: BPF prog-id=181 op=UNLOAD Jan 16 21:21:10.005000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe550aabf0 a2=94 a3=1 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.153175 kernel: audit: type=1334 audit(1768598470.005:586): prog-id=181 op=UNLOAD Jan 16 21:21:10.153521 kernel: audit: type=1300 audit(1768598470.005:586): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe550aabf0 a2=94 a3=1 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.153565 kernel: audit: type=1327 audit(1768598470.005:586): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Jan 16 21:21:10.005000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.031000 audit: BPF prog-id=182 op=LOAD Jan 16 21:21:10.188795 kernel: audit: type=1334 audit(1768598470.031:587): prog-id=182 op=LOAD Jan 16 21:21:10.188920 kernel: audit: type=1300 audit(1768598470.031:587): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe550aabe0 a2=94 a3=4 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.031000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe550aabe0 a2=94 a3=4 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.031000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.253046 kernel: audit: type=1327 audit(1768598470.031:587): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.257172 kernel: audit: type=1334 audit(1768598470.031:588): prog-id=182 op=UNLOAD Jan 16 21:21:10.031000 audit: BPF prog-id=182 op=UNLOAD Jan 16 21:21:10.031000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe550aabe0 a2=0 a3=4 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.031000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.032000 audit: BPF prog-id=183 op=LOAD Jan 16 21:21:10.032000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe550aaa40 a2=94 a3=5 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.032000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.032000 audit: BPF prog-id=183 op=UNLOAD Jan 16 21:21:10.032000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe550aaa40 a2=0 a3=5 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.032000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.032000 audit: BPF prog-id=184 op=LOAD Jan 16 21:21:10.032000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe550aac60 a2=94 a3=6 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.032000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.032000 audit: BPF prog-id=184 op=UNLOAD Jan 16 21:21:10.032000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe550aac60 a2=0 a3=6 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.032000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.034000 audit: BPF prog-id=185 op=LOAD Jan 16 21:21:10.034000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe550aa410 a2=94 a3=88 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.034000 audit: BPF prog-id=186 op=LOAD Jan 16 21:21:10.034000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe550aa290 a2=94 a3=2 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.034000 audit: BPF prog-id=186 op=UNLOAD Jan 16 21:21:10.034000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe550aa2c0 a2=0 a3=7ffe550aa3c0 items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.037000 audit: BPF prog-id=185 op=UNLOAD Jan 16 21:21:10.037000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=32494d10 a2=0 a3=1f21dc052bde7e9e items=0 ppid=4613 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:21:10.121000 audit: BPF prog-id=187 op=LOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe89182720 a2=98 a3=1999999999999999 items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.121000 audit: BPF prog-id=187 op=UNLOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe891826f0 a3=0 items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.121000 audit: BPF prog-id=188 op=LOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe89182600 a2=94 a3=ffff items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.121000 audit: BPF prog-id=188 op=UNLOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe89182600 a2=94 a3=ffff items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.121000 audit: BPF prog-id=189 op=LOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe89182640 a2=94 a3=7ffe89182820 items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.121000 audit: BPF prog-id=189 op=UNLOAD Jan 16 21:21:10.121000 audit[4767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe89182640 a2=94 a3=7ffe89182820 items=0 ppid=4613 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.121000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:21:10.293963 containerd[1592]: time="2026-01-16T21:21:10.293901403Z" level=info msg="connecting to shim 
8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00" address="unix:///run/containerd/s/e4468d3ce6b82908c5b6b0c1a453d9d946b189eacbcd624c1565b7ebb0e0c9b7" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:10.627942 systemd[1]: Started cri-containerd-8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00.scope - libcontainer container 8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00. Jan 16 21:21:10.663230 systemd-networkd[1517]: vxlan.calico: Link UP Jan 16 21:21:10.663560 systemd-networkd[1517]: vxlan.calico: Gained carrier Jan 16 21:21:10.718000 audit: BPF prog-id=190 op=LOAD Jan 16 21:21:10.722000 audit: BPF prog-id=191 op=LOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.722000 audit: BPF prog-id=191 op=UNLOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 
21:21:10.722000 audit: BPF prog-id=192 op=LOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.722000 audit: BPF prog-id=193 op=LOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.722000 audit: BPF prog-id=193 op=UNLOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.722000 audit: BPF prog-id=192 op=UNLOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.722000 audit: BPF prog-id=194 op=LOAD Jan 16 21:21:10.722000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4786 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303165643161653862326364663865313763343833303661626564 Jan 16 21:21:10.729250 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:10.784000 audit: BPF prog-id=195 op=LOAD Jan 16 21:21:10.784000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c688f0 a2=98 a3=0 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.784000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=195 op=UNLOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdc1c688c0 a3=0 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=196 op=LOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c68700 a2=94 a3=54428f items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=196 op=UNLOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdc1c68700 a2=94 a3=54428f items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=197 op=LOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c68730 a2=94 a3=2 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=197 op=UNLOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdc1c68730 a2=0 a3=2 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.786000 audit: BPF prog-id=198 op=LOAD Jan 16 21:21:10.786000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc1c684e0 a2=94 a3=4 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.788000 audit: BPF prog-id=198 op=UNLOAD Jan 16 21:21:10.788000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdc1c684e0 a2=94 a3=4 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.788000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.788000 audit: BPF prog-id=199 op=LOAD Jan 16 21:21:10.788000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc1c685e0 a2=94 a3=7ffdc1c68760 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.788000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.788000 audit: BPF prog-id=199 op=UNLOAD Jan 16 21:21:10.788000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdc1c685e0 a2=0 a3=7ffdc1c68760 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 21:21:10.788000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.789000 audit: BPF prog-id=200 op=LOAD Jan 16 21:21:10.789000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc1c67d10 a2=94 a3=2 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.789000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.789000 audit: BPF prog-id=200 op=UNLOAD Jan 16 21:21:10.789000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdc1c67d10 a2=0 a3=2 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.789000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.789000 audit: BPF prog-id=201 op=LOAD Jan 16 21:21:10.789000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc1c67e10 a2=94 a3=30 items=0 ppid=4613 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.789000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:21:10.824000 audit: BPF prog-id=202 op=LOAD Jan 16 21:21:10.824000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee2e3e3f0 a2=98 a3=0 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.824000 audit: BPF prog-id=202 op=UNLOAD Jan 16 21:21:10.824000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee2e3e3c0 a3=0 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.825000 audit: BPF prog-id=203 op=LOAD Jan 16 21:21:10.825000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee2e3e1e0 a2=94 a3=54428f items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.825000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.825000 audit: BPF prog-id=203 op=UNLOAD Jan 16 21:21:10.825000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee2e3e1e0 a2=94 a3=54428f items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.825000 audit: BPF prog-id=204 op=LOAD Jan 16 21:21:10.825000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee2e3e210 a2=94 a3=2 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.825000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.825000 audit: BPF prog-id=204 op=UNLOAD Jan 16 21:21:10.825000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee2e3e210 a2=0 a3=2 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:10.825000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:10.961877 containerd[1592]: time="2026-01-16T21:21:10.961767986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c5fd8d7d-qxwpz,Uid:a053e2c3-3297-4e1f-bf5a-da2b545cc5db,Namespace:calico-system,Attempt:0,} returns sandbox id \"8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00\"" Jan 16 21:21:10.969968 containerd[1592]: time="2026-01-16T21:21:10.969053922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:21:11.038059 systemd-networkd[1517]: calic88124028b4: Gained IPv6LL Jan 16 21:21:11.100940 containerd[1592]: time="2026-01-16T21:21:11.095777080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:11.105068 containerd[1592]: time="2026-01-16T21:21:11.104527298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:11.105068 containerd[1592]: time="2026-01-16T21:21:11.104789957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:21:11.107066 kubelet[2889]: E0116 21:21:11.106001 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:11.108506 kubelet[2889]: E0116 21:21:11.108242 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:11.108961 kubelet[2889]: E0116 21:21:11.108803 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:11.114696 containerd[1592]: time="2026-01-16T21:21:11.114122705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:21:11.199242 containerd[1592]: time="2026-01-16T21:21:11.198955896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:11.204929 containerd[1592]: time="2026-01-16T21:21:11.204438458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:21:11.204929 containerd[1592]: time="2026-01-16T21:21:11.204852720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:11.208035 kubelet[2889]: E0116 21:21:11.207023 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:11.208035 kubelet[2889]: E0116 21:21:11.207078 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:11.208035 kubelet[2889]: E0116 21:21:11.207163 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:11.208035 kubelet[2889]: E0116 21:21:11.207219 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:11.278000 audit: BPF prog-id=205 op=LOAD Jan 16 21:21:11.278000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffee2e3e0d0 a2=94 a3=1 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.278000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.279000 audit: BPF prog-id=205 op=UNLOAD Jan 16 21:21:11.279000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffee2e3e0d0 a2=94 a3=1 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.297000 audit: BPF prog-id=206 op=LOAD Jan 16 21:21:11.297000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee2e3e0c0 a2=94 a3=4 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.297000 audit: BPF prog-id=206 op=UNLOAD Jan 16 21:21:11.297000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee2e3e0c0 a2=0 a3=4 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.297000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.298000 audit: BPF prog-id=207 op=LOAD Jan 16 21:21:11.298000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee2e3df20 a2=94 a3=5 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.298000 audit: BPF prog-id=207 op=UNLOAD Jan 16 21:21:11.298000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee2e3df20 a2=0 a3=5 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.299000 audit: BPF prog-id=208 op=LOAD Jan 16 21:21:11.299000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee2e3e140 a2=94 a3=6 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.299000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.299000 audit: BPF prog-id=208 op=UNLOAD Jan 16 21:21:11.299000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffee2e3e140 a2=0 a3=6 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.299000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.299000 audit: BPF prog-id=209 op=LOAD Jan 16 21:21:11.299000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffee2e3d8f0 a2=94 a3=88 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.299000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.300000 audit: BPF prog-id=210 op=LOAD Jan 16 21:21:11.300000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffee2e3d770 a2=94 a3=2 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.300000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.300000 audit: BPF prog-id=210 op=UNLOAD Jan 16 21:21:11.300000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffee2e3d7a0 a2=0 a3=7ffee2e3d8a0 items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.300000 audit: BPF prog-id=209 op=UNLOAD Jan 16 21:21:11.300000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=475dd10 a2=0 a3=ad57d0648ecc0c7c items=0 ppid=4613 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:21:11.318000 audit: BPF prog-id=201 op=UNLOAD Jan 16 21:21:11.318000 audit[4613]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000f07440 a2=0 a3=0 items=0 ppid=4598 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.318000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 21:21:11.518000 audit[4869]: NETFILTER_CFG table=nat:119 family=2 
entries=15 op=nft_register_chain pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:11.518000 audit[4869]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdae455e20 a2=0 a3=7ffdae455e0c items=0 ppid=4613 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:11.546000 audit[4866]: NETFILTER_CFG table=raw:120 family=2 entries=21 op=nft_register_chain pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:11.546000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc45e4ffc0 a2=0 a3=7ffc45e4ffac items=0 ppid=4613 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:11.562000 audit[4876]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4876 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:11.562000 audit[4876]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffd0d6fd40 a2=0 a3=7fffd0d6fd2c items=0 ppid=4613 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.562000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:11.565000 audit[4870]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:11.565000 audit[4870]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffec7bfdda0 a2=0 a3=7ffec7bfdd8c items=0 ppid=4613 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.565000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:11.715078 kubelet[2889]: E0116 21:21:11.714965 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:11.889000 audit[4883]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 16 21:21:11.889000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6ad86c00 a2=0 a3=7ffe6ad86bec items=0 ppid=3054 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:11.897000 audit[4883]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:11.897000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe6ad86c00 a2=0 a3=0 items=0 ppid=3054 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:11.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:12.699801 systemd-networkd[1517]: vxlan.calico: Gained IPv6LL Jan 16 21:21:12.776773 kubelet[2889]: E0116 21:21:12.720739 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:13.071836 containerd[1592]: time="2026-01-16T21:21:13.071126859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:21:13.086166 containerd[1592]: time="2026-01-16T21:21:13.085846931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:13.905209 systemd-networkd[1517]: calia1f58ba8768: Link UP Jan 16 21:21:13.914154 systemd-networkd[1517]: calia1f58ba8768: Gained carrier Jan 16 21:21:13.978714 containerd[1592]: 2026-01-16 21:21:13.435 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--dgkw9-eth0 goldmane-7c778bb748- calico-system 91f2e5a0-0976-4b7a-ac63-530715dff408 1003 0 2026-01-16 21:19:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-dgkw9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia1f58ba8768 [] [] }} ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-" Jan 16 21:21:13.978714 containerd[1592]: 2026-01-16 21:21:13.435 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" 
Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.978714 containerd[1592]: 2026-01-16 21:21:13.590 [INFO][4916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" HandleID="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Workload="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.591 [INFO][4916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" HandleID="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Workload="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f34b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-dgkw9", "timestamp":"2026-01-16 21:21:13.590720696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.591 [INFO][4916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.591 [INFO][4916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.592 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.612 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" host="localhost" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.653 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.684 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.692 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.799 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:13.979150 containerd[1592]: 2026-01-16 21:21:13.800 [INFO][4916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" host="localhost" Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.805 [INFO][4916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.818 [INFO][4916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" host="localhost" Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.861 [INFO][4916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" host="localhost" Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.861 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" host="localhost" Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.864 [INFO][4916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:13.981112 containerd[1592]: 2026-01-16 21:21:13.864 [INFO][4916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" HandleID="k8s-pod-network.7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Workload="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.981535 containerd[1592]: 2026-01-16 21:21:13.881 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dgkw9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"91f2e5a0-0976-4b7a-ac63-530715dff408", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-dgkw9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1f58ba8768", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:13.981535 containerd[1592]: 2026-01-16 21:21:13.885 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.981866 containerd[1592]: 2026-01-16 21:21:13.888 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1f58ba8768 ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.981866 containerd[1592]: 2026-01-16 21:21:13.910 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:13.981949 containerd[1592]: 2026-01-16 21:21:13.914 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dgkw9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"91f2e5a0-0976-4b7a-ac63-530715dff408", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b", Pod:"goldmane-7c778bb748-dgkw9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1f58ba8768", MAC:"a6:f8:90:67:4f:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:13.982218 containerd[1592]: 2026-01-16 21:21:13.951 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" Namespace="calico-system" Pod="goldmane-7c778bb748-dgkw9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgkw9-eth0" Jan 16 21:21:14.074690 containerd[1592]: time="2026-01-16T21:21:14.072978877Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:21:14.083475 kubelet[2889]: E0116 21:21:14.080459 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:14.095863 containerd[1592]: time="2026-01-16T21:21:14.095714731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,}" Jan 16 21:21:14.116863 systemd-networkd[1517]: cali2ca61474304: Link UP Jan 16 21:21:14.121969 systemd-networkd[1517]: cali2ca61474304: Gained carrier Jan 16 21:21:14.118000 audit[4936]: NETFILTER_CFG table=filter:125 family=2 entries=44 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:14.118000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffecbaf2670 a2=0 a3=7ffecbaf265c items=0 ppid=4613 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.118000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:14.331554 containerd[1592]: 2026-01-16 21:21:13.378 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0 calico-apiserver-6cb5984987- calico-apiserver ac06912f-e290-4031-a848-1392298fa9de 996 0 2026-01-16 21:19:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb5984987 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cb5984987-nz6bj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ca61474304 [] [] }} ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-" Jan 16 21:21:14.331554 containerd[1592]: 2026-01-16 21:21:13.385 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.331554 containerd[1592]: 2026-01-16 21:21:13.588 [INFO][4910] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" HandleID="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Workload="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.589 [INFO][4910] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" HandleID="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Workload="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000490bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cb5984987-nz6bj", "timestamp":"2026-01-16 21:21:13.588861431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.593 [INFO][4910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.861 [INFO][4910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.863 [INFO][4910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.904 [INFO][4910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" host="localhost" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.942 [INFO][4910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:13.980 [INFO][4910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:14.005 [INFO][4910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:14.016 [INFO][4910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:14.332090 containerd[1592]: 2026-01-16 21:21:14.016 [INFO][4910] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" host="localhost" Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.022 [INFO][4910] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298 Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.040 [INFO][4910] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" host="localhost" Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.085 [INFO][4910] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" host="localhost" Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.085 [INFO][4910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" host="localhost" Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.085 [INFO][4910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:14.332910 containerd[1592]: 2026-01-16 21:21:14.086 [INFO][4910] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" HandleID="k8s-pod-network.6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Workload="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.333093 containerd[1592]: 2026-01-16 21:21:14.101 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0", GenerateName:"calico-apiserver-6cb5984987-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac06912f-e290-4031-a848-1392298fa9de", ResourceVersion:"996", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5984987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cb5984987-nz6bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ca61474304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:14.342739 containerd[1592]: 2026-01-16 21:21:14.101 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.342739 containerd[1592]: 2026-01-16 21:21:14.101 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ca61474304 ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.342739 containerd[1592]: 2026-01-16 21:21:14.119 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.342877 containerd[1592]: 2026-01-16 21:21:14.129 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0", GenerateName:"calico-apiserver-6cb5984987-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac06912f-e290-4031-a848-1392298fa9de", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5984987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298", Pod:"calico-apiserver-6cb5984987-nz6bj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ca61474304", MAC:"4e:6c:89:81:b5:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:14.343143 containerd[1592]: 2026-01-16 21:21:14.296 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-nz6bj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--nz6bj-eth0" Jan 16 21:21:14.563000 audit[4993]: NETFILTER_CFG table=filter:126 family=2 entries=54 op=nft_register_chain pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:14.563000 audit[4993]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffd0ac5d3c0 a2=0 a3=7ffd0ac5d3ac items=0 ppid=4613 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.563000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:14.588211 containerd[1592]: time="2026-01-16T21:21:14.587147057Z" level=info msg="connecting to shim 7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b" address="unix:///run/containerd/s/b8539200ab6f278939875720b9d76cd7a1dbb7f93e31853ddd0af71688359bd3" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:14.813108 containerd[1592]: time="2026-01-16T21:21:14.812699484Z" level=info msg="connecting to shim 6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298" address="unix:///run/containerd/s/083347486bdb81a79c177fab84bd9285ab40ad9850e079a59132cf0567a3273b" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:14.820939 systemd[1]: Started 
cri-containerd-7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b.scope - libcontainer container 7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b. Jan 16 21:21:14.914000 audit: BPF prog-id=211 op=LOAD Jan 16 21:21:14.919000 audit: BPF prog-id=212 op=LOAD Jan 16 21:21:14.919000 audit[5001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.924000 audit: BPF prog-id=212 op=UNLOAD Jan 16 21:21:14.924000 audit[5001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.926000 audit: BPF prog-id=213 op=LOAD Jan 16 21:21:14.926000 audit[5001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.926000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.928000 audit: BPF prog-id=214 op=LOAD Jan 16 21:21:14.928000 audit[5001]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.928000 audit: BPF prog-id=214 op=UNLOAD Jan 16 21:21:14.928000 audit[5001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.928000 audit: BPF prog-id=213 op=UNLOAD Jan 16 21:21:14.928000 audit[5001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:21:14.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.929000 audit: BPF prog-id=215 op=LOAD Jan 16 21:21:14.929000 audit[5001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4984 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:14.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761316133326430306337313737316239386230313537313536363837 Jan 16 21:21:14.936829 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:15.003191 systemd-networkd[1517]: calia1f58ba8768: Gained IPv6LL Jan 16 21:21:15.082943 kubelet[2889]: E0116 21:21:15.078245 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:15.083205 containerd[1592]: time="2026-01-16T21:21:15.082739337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,}" Jan 16 21:21:15.092484 systemd[1]: Started cri-containerd-6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298.scope - libcontainer container 6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298. 
Jan 16 21:21:15.256023 containerd[1592]: time="2026-01-16T21:21:15.252440920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgkw9,Uid:91f2e5a0-0976-4b7a-ac63-530715dff408,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b\"" Jan 16 21:21:15.271371 containerd[1592]: time="2026-01-16T21:21:15.270056150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:21:15.291000 audit: BPF prog-id=216 op=LOAD Jan 16 21:21:15.304832 kernel: kauditd_printk_skb: 208 callbacks suppressed Jan 16 21:21:15.305220 kernel: audit: type=1334 audit(1768598475.291:659): prog-id=216 op=LOAD Jan 16 21:21:15.319015 kernel: audit: type=1334 audit(1768598475.296:660): prog-id=217 op=LOAD Jan 16 21:21:15.296000 audit: BPF prog-id=217 op=LOAD Jan 16 21:21:15.319229 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:15.330914 kernel: audit: type=1300 audit(1768598475.296:660): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.296000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.379512 kernel: audit: type=1327 audit(1768598475.296:660): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.385438 containerd[1592]: time="2026-01-16T21:21:15.384401435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:15.393467 containerd[1592]: time="2026-01-16T21:21:15.391727271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:21:15.393467 containerd[1592]: time="2026-01-16T21:21:15.392007503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:15.393726 kubelet[2889]: E0116 21:21:15.392135 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:15.393726 kubelet[2889]: E0116 21:21:15.392180 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:15.393726 
kubelet[2889]: E0116 21:21:15.392996 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:15.393726 kubelet[2889]: E0116 21:21:15.393043 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:15.296000 audit: BPF prog-id=217 op=UNLOAD Jan 16 21:21:15.450081 kernel: audit: type=1334 audit(1768598475.296:661): prog-id=217 op=UNLOAD Jan 16 21:21:15.296000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.501759 kernel: audit: type=1300 audit(1768598475.296:661): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.296000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.529931 systemd-networkd[1517]: cali35f247a6010: Link UP Jan 16 21:21:15.560489 kernel: audit: type=1327 audit(1768598475.296:661): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.297000 audit: BPF prog-id=218 op=LOAD Jan 16 21:21:15.581154 kernel: audit: type=1334 audit(1768598475.297:662): prog-id=218 op=LOAD Jan 16 21:21:15.297000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.641217 kernel: audit: type=1300 audit(1768598475.297:662): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.641529 kernel: audit: type=1327 audit(1768598475.297:662): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.297000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.639089 systemd-networkd[1517]: cali35f247a6010: Gained carrier Jan 16 21:21:15.298000 audit: BPF prog-id=219 op=LOAD Jan 16 21:21:15.298000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.298000 audit: BPF prog-id=219 op=UNLOAD Jan 16 21:21:15.298000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.298000 audit: BPF prog-id=218 op=UNLOAD Jan 16 21:21:15.298000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.298000 audit: BPF prog-id=220 op=LOAD Jan 16 21:21:15.298000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5024 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666353366373039616434336566623365613037303333363039383738 Jan 16 21:21:15.795186 kubelet[2889]: E0116 21:21:15.794912 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:15.826238 containerd[1592]: 2026-01-16 21:21:14.790 [INFO][4947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--s5mq8-eth0 coredns-66bc5c9577- kube-system 0752a7cf-5d05-4249-a029-fa96ed25d7e6 1001 0 2026-01-16 21:19:01 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-s5mq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35f247a6010 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-" Jan 16 21:21:15.826238 containerd[1592]: 2026-01-16 21:21:14.805 [INFO][4947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.826238 containerd[1592]: 2026-01-16 21:21:15.199 [INFO][5038] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" HandleID="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Workload="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.203 [INFO][5038] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" HandleID="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Workload="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019c220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-s5mq8", "timestamp":"2026-01-16 21:21:15.199101811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.203 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.203 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.203 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.242 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" host="localhost" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.269 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.313 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.324 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.335 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:15.830225 containerd[1592]: 2026-01-16 21:21:15.335 [INFO][5038] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" host="localhost" Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.343 [INFO][5038] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.380 [INFO][5038] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" host="localhost" Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.437 [INFO][5038] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" host="localhost" Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.439 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" host="localhost" Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.452 [INFO][5038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:15.831786 containerd[1592]: 2026-01-16 21:21:15.453 [INFO][5038] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" HandleID="k8s-pod-network.5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Workload="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.487 [INFO][4947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s5mq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0752a7cf-5d05-4249-a029-fa96ed25d7e6", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 1, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-s5mq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35f247a6010", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.487 [INFO][4947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 
21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.490 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35f247a6010 ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.636 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.664 [INFO][4947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--s5mq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0752a7cf-5d05-4249-a029-fa96ed25d7e6", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf", Pod:"coredns-66bc5c9577-s5mq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35f247a6010", MAC:"1a:bc:d5:2e:3f:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:15.832803 containerd[1592]: 2026-01-16 21:21:15.758 [INFO][4947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" Namespace="kube-system" Pod="coredns-66bc5c9577-s5mq8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--s5mq8-eth0" Jan 16 21:21:15.834920 containerd[1592]: time="2026-01-16T21:21:15.834170703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-nz6bj,Uid:ac06912f-e290-4031-a848-1392298fa9de,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298\"" Jan 16 21:21:15.860912 containerd[1592]: time="2026-01-16T21:21:15.860763325Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:16.004869 containerd[1592]: time="2026-01-16T21:21:15.998912273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:16.003000 audit[5128]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5128 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:16.003000 audit[5128]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd50019d10 a2=0 a3=7ffd50019cfc items=0 ppid=3054 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:16.030771 systemd-networkd[1517]: cali2ca61474304: Gained IPv6LL Jan 16 21:21:16.082825 kubelet[2889]: E0116 21:21:16.075126 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:16.112213 containerd[1592]: time="2026-01-16T21:21:16.112157040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:16.116879 containerd[1592]: time="2026-01-16T21:21:16.116841233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:16.124207 containerd[1592]: time="2026-01-16T21:21:16.120214924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,}" 
Jan 16 21:21:16.132552 kubelet[2889]: E0116 21:21:16.132497 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:16.175000 audit[5128]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5128 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:16.175000 audit[5128]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd50019d10 a2=0 a3=0 items=0 ppid=3054 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:16.193000 audit[5127]: NETFILTER_CFG table=filter:129 family=2 entries=50 op=nft_register_chain pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:16.193000 audit[5127]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffc0174f330 a2=0 a3=7ffc0174f31c items=0 ppid=4613 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.193000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:16.241124 containerd[1592]: time="2026-01-16T21:21:16.204860273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,}" Jan 16 
21:21:16.241200 kubelet[2889]: E0116 21:21:16.132837 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:16.241200 kubelet[2889]: E0116 21:21:16.132948 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:16.241200 kubelet[2889]: E0116 21:21:16.132994 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:16.254822 containerd[1592]: time="2026-01-16T21:21:16.253023761Z" level=info msg="connecting to shim 5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf" address="unix:///run/containerd/s/545efd2d929e1f50ccd0927e20537327e6cade78a510c7d32118bfc74d3f73c7" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:16.499778 systemd-networkd[1517]: cali7db0a90b3a3: Link UP Jan 16 21:21:16.500090 systemd-networkd[1517]: cali7db0a90b3a3: Gained carrier Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:14.745 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0 calico-apiserver-6cb5984987- calico-apiserver 7e12a227-9190-436c-a55f-74274779eb32 1008 0 2026-01-16 21:19:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb5984987 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cb5984987-cb6w2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7db0a90b3a3 [] [] }} ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:14.747 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.211 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" HandleID="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Workload="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.226 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" HandleID="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Workload="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00033fb80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cb5984987-cb6w2", "timestamp":"2026-01-16 21:21:15.211915639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.228 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.440 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.440 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.538 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.685 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.864 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.887 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.940 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.940 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:15.981 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:16.230 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:16.329 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:16.336 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" host="localhost" Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:16.342 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 21:21:16.639442 containerd[1592]: 2026-01-16 21:21:16.352 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" HandleID="k8s-pod-network.cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Workload="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.482 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0", GenerateName:"calico-apiserver-6cb5984987-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e12a227-9190-436c-a55f-74274779eb32", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5984987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cb5984987-cb6w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7db0a90b3a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.483 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.483 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7db0a90b3a3 ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.502 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.503 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0", 
GenerateName:"calico-apiserver-6cb5984987-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e12a227-9190-436c-a55f-74274779eb32", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5984987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f", Pod:"calico-apiserver-6cb5984987-cb6w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7db0a90b3a3", MAC:"8e:a6:23:62:d2:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:16.641871 containerd[1592]: 2026-01-16 21:21:16.576 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5984987-cb6w2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cb5984987--cb6w2-eth0" Jan 16 21:21:16.733769 systemd-networkd[1517]: cali35f247a6010: Gained IPv6LL Jan 16 21:21:16.745833 systemd[1]: Started cri-containerd-5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf.scope - libcontainer container 
5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf. Jan 16 21:21:16.851000 audit: BPF prog-id=221 op=LOAD Jan 16 21:21:16.853000 audit: BPF prog-id=222 op=LOAD Jan 16 21:21:16.853000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.861000 audit: BPF prog-id=222 op=UNLOAD Jan 16 21:21:16.861000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.862000 audit: BPF prog-id=223 op=LOAD Jan 16 21:21:16.862000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.862000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.875000 audit: BPF prog-id=224 op=LOAD Jan 16 21:21:16.875000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.875000 audit: BPF prog-id=224 op=UNLOAD Jan 16 21:21:16.875000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.875000 audit: BPF prog-id=223 op=UNLOAD Jan 16 21:21:16.875000 audit[5173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:16.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.875000 audit: BPF prog-id=225 op=LOAD Jan 16 21:21:16.875000 audit[5173]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5144 pid=5173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561626534623565393031366662376263633162376638373930636264 Jan 16 21:21:16.907706 kubelet[2889]: E0116 21:21:16.905241 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:16.945998 kubelet[2889]: E0116 21:21:16.941978 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:16.965517 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:17.050112 systemd-networkd[1517]: cali4f88e7e9b17: Link UP Jan 16 21:21:17.055212 systemd-networkd[1517]: cali4f88e7e9b17: Gained carrier Jan 16 21:21:17.160806 containerd[1592]: time="2026-01-16T21:21:17.158077276Z" level=info msg="connecting to shim cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f" address="unix:///run/containerd/s/8745cc56fb26e80edf44ee0bb5615d2335fb19a866086157e0477085e38656f9" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:15.452 [INFO][5069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--j4h6j-eth0 coredns-66bc5c9577- kube-system 551adc74-4bae-43fc-8b67-656cbc70d543 1009 0 2026-01-16 21:19:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-j4h6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f88e7e9b17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:15.469 [INFO][5069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:15.824 [INFO][5100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" HandleID="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Workload="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:15.825 [INFO][5100] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" HandleID="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Workload="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-j4h6j", "timestamp":"2026-01-16 21:21:15.824130088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:15.826 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.357 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
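The audit `PROCTITLE` payloads repeated in the records above are hex-encoded command lines with NUL bytes separating the arguments, truncated by the kernel at 128 bytes (which is why the container ID at the end of the runc invocation is cut short). A minimal decoding sketch, standard library only:

```python
def decode_proctitle(hex_payload: str) -> list[str]:
    """Decode an audit PROCTITLE hex payload into its NUL-separated argv."""
    raw = bytes.fromhex(hex_payload)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The runc proctitle recorded repeatedly in the audit entries above
# (split here only for readability; the payload is one hex string):
argv = decode_proctitle(
    "72756E63002D2D726F6F7400"
    "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"
    "2D2D6C6F6700"
    "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
    "3561626534623565393031366662376263633162376638373930636264"
)
print(argv[:4])
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```

The last argument decodes to a `/run/containerd/io.containerd.runtime.v2.task/k8s.io/5abe…` log path whose container ID is clipped by the 128-byte proctitle limit, matching the sandbox ID that appears in full in the containerd messages later in the log.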
Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.357 [INFO][5100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.452 [INFO][5100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.597 [INFO][5100] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.715 [INFO][5100] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.750 [INFO][5100] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.818 [INFO][5100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.820 [INFO][5100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.840 [INFO][5100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848 Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.872 [INFO][5100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.952 [INFO][5100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.952 [INFO][5100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" host="localhost" Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.952 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:17.242847 containerd[1592]: 2026-01-16 21:21:16.952 [INFO][5100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" HandleID="k8s-pod-network.e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Workload="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.010 [INFO][5069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j4h6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"551adc74-4bae-43fc-8b67-656cbc70d543", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-j4h6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f88e7e9b17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.011 [INFO][5069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.011 [INFO][5069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f88e7e9b17 ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 
21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.069 [INFO][5069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.079 [INFO][5069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j4h6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"551adc74-4bae-43fc-8b67-656cbc70d543", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848", Pod:"coredns-66bc5c9577-j4h6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f88e7e9b17", 
MAC:"fa:af:22:ba:6e:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:17.244793 containerd[1592]: 2026-01-16 21:21:17.229 [INFO][5069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" Namespace="kube-system" Pod="coredns-66bc5c9577-j4h6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j4h6j-eth0" Jan 16 21:21:17.382000 audit[5244]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:17.382000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe98f5cef0 a2=0 a3=7ffe98f5cedc items=0 ppid=3054 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:17.406000 audit[5244]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5244 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:17.406000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe98f5cef0 a2=0 a3=0 items=0 ppid=3054 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:17.451000 audit[5252]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:17.451000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffd2211cab0 a2=0 a3=7ffd2211ca9c items=0 ppid=4613 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.451000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:17.482548 containerd[1592]: time="2026-01-16T21:21:17.482214463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s5mq8,Uid:0752a7cf-5d05-4249-a029-fa96ed25d7e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf\"" Jan 16 21:21:17.491923 kubelet[2889]: E0116 21:21:17.491227 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:17.534003 containerd[1592]: time="2026-01-16T21:21:17.533760362Z" level=info msg="CreateContainer within sandbox 
\"5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:21:17.578770 systemd[1]: Started cri-containerd-cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f.scope - libcontainer container cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f. Jan 16 21:21:17.654000 audit[5285]: NETFILTER_CFG table=filter:133 family=2 entries=48 op=nft_register_chain pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:17.654000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7ffd64ffe840 a2=0 a3=7ffd64ffe82c items=0 ppid=4613 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.654000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:17.670755 containerd[1592]: time="2026-01-16T21:21:17.667744111Z" level=info msg="connecting to shim e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848" address="unix:///run/containerd/s/1b9377d0329826806971a6c2401343e482b3d5bc9263951326c252cb5f96ead3" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:17.735972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552441194.mount: Deactivated successfully. 
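The Calico IPAM sequence above claims 192.168.88.134 out of the host-affine block 192.168.88.128/26. As a quick sanity sketch (standard library only, numbers taken from the log entries), a /26 block spans 64 addresses and the claimed IP falls inside it:

```python
import ipaddress

# Block and address as reported by ipam/ipam.go in the log above.
block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.134")

print(block.num_addresses)      # 64
print(claimed in block)         # True
print(block.broadcast_address)  # 192.168.88.191
```

This is consistent with the endpoint later being written with `IPNetworks:[]string{"192.168.88.134/32"}`: the /26 is the allocation block, while each workload gets a /32 from it.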
Jan 16 21:21:17.752693 containerd[1592]: time="2026-01-16T21:21:17.752525919Z" level=info msg="Container aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:17.764000 audit: BPF prog-id=226 op=LOAD Jan 16 21:21:17.771000 audit: BPF prog-id=227 op=LOAD Jan 16 21:21:17.771000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.771000 audit: BPF prog-id=227 op=UNLOAD Jan 16 21:21:17.771000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.772000 audit: BPF prog-id=228 op=LOAD Jan 16 21:21:17.772000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:21:17.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.772000 audit: BPF prog-id=229 op=LOAD Jan 16 21:21:17.772000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.772000 audit: BPF prog-id=229 op=UNLOAD Jan 16 21:21:17.772000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.772000 audit: BPF prog-id=228 op=UNLOAD Jan 16 21:21:17.772000 audit[5251]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.772000 audit: BPF prog-id=230 op=LOAD Jan 16 21:21:17.772000 audit[5251]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5214 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:17.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366653964336133383738663334313930353961656261643036653065 Jan 16 21:21:17.805463 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:17.822832 systemd-networkd[1517]: cali7db0a90b3a3: Gained IPv6LL Jan 16 21:21:17.833466 containerd[1592]: time="2026-01-16T21:21:17.832573424Z" level=info msg="CreateContainer within sandbox \"5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017\"" Jan 16 21:21:17.836945 containerd[1592]: time="2026-01-16T21:21:17.833964165Z" level=info msg="StartContainer for \"aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017\"" Jan 16 21:21:17.837485 containerd[1592]: time="2026-01-16T21:21:17.837244735Z" level=info msg="connecting to shim aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017" 
address="unix:///run/containerd/s/545efd2d929e1f50ccd0927e20537327e6cade78a510c7d32118bfc74d3f73c7" protocol=ttrpc version=3 Jan 16 21:21:17.918211 systemd[1]: Started cri-containerd-aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017.scope - libcontainer container aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017. Jan 16 21:21:17.947522 kubelet[2889]: E0116 21:21:17.947476 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:18.042000 audit: BPF prog-id=231 op=LOAD Jan 16 21:21:18.067000 audit: BPF prog-id=232 op=LOAD Jan 16 21:21:18.067000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.067000 audit: BPF prog-id=232 op=UNLOAD Jan 16 21:21:18.067000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.073000 audit: BPF prog-id=233 op=LOAD Jan 16 21:21:18.073000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.078000 audit: BPF prog-id=234 op=LOAD Jan 16 21:21:18.078000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.078000 audit: BPF prog-id=234 op=UNLOAD Jan 16 21:21:18.078000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.078000 audit: BPF prog-id=233 op=UNLOAD Jan 16 21:21:18.078000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.078000 audit: BPF prog-id=235 op=LOAD Jan 16 21:21:18.078000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5144 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161653561306630613737326564333731616231353135313763623933 Jan 16 21:21:18.318168 systemd[1]: Started cri-containerd-e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848.scope - libcontainer container e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848. 
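In the audit records above, syscall 321 is bpf(2) and syscall 3 is close(2) on x86_64: each `BPF prog-id=N op=LOAD` corresponds to a bpf() call whose returned fd appears as `exit=21`/`exit=23`, and the matching `op=UNLOAD` is emitted when runc closes that fd. A hypothetical pairing sketch over fragments like these (the record strings below are taken from this log; an UNLOAD without a LOAD in the excerpt, such as prog-id 223, was loaded earlier in the log):

```python
import re

# Sample audit fragments, as they appear in the log above.
records = [
    "audit: BPF prog-id=224 op=LOAD",
    "audit: BPF prog-id=224 op=UNLOAD",
    "audit: BPF prog-id=223 op=UNLOAD",
    "audit: BPF prog-id=225 op=LOAD",
]

loaded = set()
unmatched = []
for rec in records:
    m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", rec)
    if not m:
        continue
    prog_id, op = int(m.group(1)), m.group(2)
    if op == "LOAD":
        loaded.add(prog_id)
    elif prog_id in loaded:
        loaded.discard(prog_id)          # LOAD/UNLOAD pair closed
    else:
        unmatched.append(prog_id)        # loaded before this excerpt began

print(loaded)     # {225}  -- still loaded at the end of the excerpt
print(unmatched)  # [223]  -- its LOAD happened earlier in the log
```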
Jan 16 21:21:18.356869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566649193.mount: Deactivated successfully. Jan 16 21:21:18.419482 containerd[1592]: time="2026-01-16T21:21:18.419233321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5984987-cb6w2,Uid:7e12a227-9190-436c-a55f-74274779eb32,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f\"" Jan 16 21:21:18.440046 containerd[1592]: time="2026-01-16T21:21:18.440000346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:18.463000 audit: BPF prog-id=236 op=LOAD Jan 16 21:21:18.473000 audit: BPF prog-id=237 op=LOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f0238 a2=98 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.473000 audit: BPF prog-id=237 op=UNLOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.473000 audit: BPF prog-id=238 op=LOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f0488 a2=98 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.473000 audit: BPF prog-id=239 op=LOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f0218 a2=98 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.473000 audit: BPF prog-id=239 op=UNLOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.473000 audit: BPF prog-id=238 op=UNLOAD Jan 16 21:21:18.473000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.474000 audit: BPF prog-id=240 op=LOAD Jan 16 21:21:18.474000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f06e8 a2=98 a3=0 items=0 ppid=5291 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532626630346235343063323033626237626363623762663361623632 Jan 16 21:21:18.482095 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:18.549849 containerd[1592]: time="2026-01-16T21:21:18.549518022Z" level=info msg="StartContainer for 
\"aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017\" returns successfully" Jan 16 21:21:18.560037 containerd[1592]: time="2026-01-16T21:21:18.559695678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:18.571916 containerd[1592]: time="2026-01-16T21:21:18.570990705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:18.571916 containerd[1592]: time="2026-01-16T21:21:18.571119596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:18.572128 kubelet[2889]: E0116 21:21:18.571518 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:18.572128 kubelet[2889]: E0116 21:21:18.571700 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:18.572128 kubelet[2889]: E0116 21:21:18.571807 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 
21:21:18.572128 kubelet[2889]: E0116 21:21:18.571847 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:18.719972 systemd-networkd[1517]: cali4f88e7e9b17: Gained IPv6LL Jan 16 21:21:18.778142 systemd-networkd[1517]: cali757e65142c9: Link UP Jan 16 21:21:18.781895 systemd-networkd[1517]: cali757e65142c9: Gained carrier Jan 16 21:21:18.819132 containerd[1592]: time="2026-01-16T21:21:18.818466917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4h6j,Uid:551adc74-4bae-43fc-8b67-656cbc70d543,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848\"" Jan 16 21:21:18.834219 kubelet[2889]: E0116 21:21:18.822915 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:18.869005 containerd[1592]: time="2026-01-16T21:21:18.868477682Z" level=info msg="CreateContainer within sandbox \"e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.224 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6hngd-eth0 csi-node-driver- calico-system b99000d7-a136-4299-82d0-76fa7e3c28f2 835 0 2026-01-16 21:19:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6hngd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali757e65142c9 [] [] }} ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.242 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.813 [INFO][5249] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" HandleID="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Workload="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.813 [INFO][5249] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" HandleID="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Workload="localhost-k8s-csi--node--driver--6hngd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6hngd", "timestamp":"2026-01-16 21:21:17.813726859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.814 [INFO][5249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.814 [INFO][5249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:17.887 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.051 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.301 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.393 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.413 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.427 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.427 [INFO][5249] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.445 [INFO][5249] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34 Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.509 [INFO][5249] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.595 [INFO][5249] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.596 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" host="localhost" Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.602 [INFO][5249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:18.896740 containerd[1592]: 2026-01-16 21:21:18.602 [INFO][5249] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" HandleID="k8s-pod-network.fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Workload="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.665 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hngd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99000d7-a136-4299-82d0-76fa7e3c28f2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6hngd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali757e65142c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.735 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.736 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali757e65142c9 ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.784 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.798 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hngd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99000d7-a136-4299-82d0-76fa7e3c28f2", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34", Pod:"csi-node-driver-6hngd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali757e65142c9", MAC:"42:54:02:13:2f:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 
21:21:18.898882 containerd[1592]: 2026-01-16 21:21:18.867 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" Namespace="calico-system" Pod="csi-node-driver-6hngd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hngd-eth0" Jan 16 21:21:18.952056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007683669.mount: Deactivated successfully. Jan 16 21:21:19.022930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199751328.mount: Deactivated successfully. Jan 16 21:21:19.215700 containerd[1592]: time="2026-01-16T21:21:19.214226089Z" level=info msg="Container c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:19.308153 kubelet[2889]: E0116 21:21:19.305781 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:19.322168 containerd[1592]: time="2026-01-16T21:21:19.317764331Z" level=info msg="CreateContainer within sandbox \"e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb\"" Jan 16 21:21:19.331728 containerd[1592]: time="2026-01-16T21:21:19.330918200Z" level=info msg="StartContainer for \"c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb\"" Jan 16 21:21:19.338862 kubelet[2889]: E0116 21:21:19.338111 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:19.354000 audit[5394]: NETFILTER_CFG table=filter:134 family=2 entries=62 op=nft_register_chain pid=5394 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:19.354000 audit[5394]: SYSCALL arch=c000003e syscall=46 success=yes exit=28368 a0=3 a1=7ffd66cc8060 a2=0 a3=7ffd66cc804c items=0 ppid=4613 pid=5394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.354000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:19.392719 containerd[1592]: time="2026-01-16T21:21:19.391122313Z" level=info msg="connecting to shim c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb" address="unix:///run/containerd/s/1b9377d0329826806971a6c2401343e482b3d5bc9263951326c252cb5f96ead3" protocol=ttrpc version=3 Jan 16 21:21:19.450038 containerd[1592]: time="2026-01-16T21:21:19.449976268Z" level=info msg="connecting to shim fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34" address="unix:///run/containerd/s/47cb9027d0d9de0fb5958620be90e87ab4533882e148e7cb8e8653fd39318f91" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:19.458097 kubelet[2889]: I0116 21:21:19.458033 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s5mq8" podStartSLOduration=138.458011274 podStartE2EDuration="2m18.458011274s" podCreationTimestamp="2026-01-16 21:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:19.452219196 +0000 UTC 
m=+144.010522167" watchObservedRunningTime="2026-01-16 21:21:19.458011274 +0000 UTC m=+144.016314235" Jan 16 21:21:19.580104 systemd-networkd[1517]: cali0f6a3072d97: Link UP Jan 16 21:21:19.588804 systemd-networkd[1517]: cali0f6a3072d97: Gained carrier Jan 16 21:21:19.692984 systemd[1]: Started cri-containerd-fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34.scope - libcontainer container fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34. Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:17.009 [INFO][5140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0 calico-kube-controllers-7c7c579b5f- calico-system f80ed623-af1a-45e7-a125-0c7c2229f592 992 0 2026-01-16 21:19:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c7c579b5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c7c579b5f-dsr85 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0f6a3072d97 [] [] }} ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:17.026 [INFO][5140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.034 [INFO][5228] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" HandleID="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Workload="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.058 [INFO][5228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" HandleID="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Workload="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a45e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c7c579b5f-dsr85", "timestamp":"2026-01-16 21:21:18.034143243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.059 [INFO][5228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.603 [INFO][5228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.605 [INFO][5228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.741 [INFO][5228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.894 [INFO][5228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:18.999 [INFO][5228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.070 [INFO][5228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.323 [INFO][5228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.323 [INFO][5228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.359 [INFO][5228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083 Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.424 [INFO][5228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.515 [INFO][5228] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.515 [INFO][5228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" host="localhost" Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.515 [INFO][5228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:21:19.696955 containerd[1592]: 2026-01-16 21:21:19.515 [INFO][5228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" HandleID="k8s-pod-network.feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Workload="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.699171 containerd[1592]: 2026-01-16 21:21:19.532 [INFO][5140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0", GenerateName:"calico-kube-controllers-7c7c579b5f-", Namespace:"calico-system", SelfLink:"", UID:"f80ed623-af1a-45e7-a125-0c7c2229f592", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c579b5f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c7c579b5f-dsr85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f6a3072d97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:19.699171 containerd[1592]: 2026-01-16 21:21:19.532 [INFO][5140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.699171 containerd[1592]: 2026-01-16 21:21:19.532 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f6a3072d97 ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.699171 containerd[1592]: 2026-01-16 21:21:19.598 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.699171 containerd[1592]: 
2026-01-16 21:21:19.599 [INFO][5140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0", GenerateName:"calico-kube-controllers-7c7c579b5f-", Namespace:"calico-system", SelfLink:"", UID:"f80ed623-af1a-45e7-a125-0c7c2229f592", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 19, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c579b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083", Pod:"calico-kube-controllers-7c7c579b5f-dsr85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f6a3072d97", MAC:"e6:d2:eb:d2:46:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:21:19.699171 containerd[1592]: 
2026-01-16 21:21:19.650 [INFO][5140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" Namespace="calico-system" Pod="calico-kube-controllers-7c7c579b5f-dsr85" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7c579b5f--dsr85-eth0" Jan 16 21:21:19.760574 systemd[1]: Started cri-containerd-c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb.scope - libcontainer container c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb. Jan 16 21:21:19.856000 audit[5455]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:19.856000 audit[5455]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffede617b60 a2=0 a3=7ffede617b4c items=0 ppid=3054 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:19.881000 audit[5455]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:19.881000 audit[5455]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffede617b60 a2=0 a3=0 items=0 ppid=3054 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:19.913000 audit[5454]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=5454 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:21:19.913000 audit[5454]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffd6f409bc0 a2=0 a3=7ffd6f409bac items=0 ppid=4613 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:21:19.923000 audit: BPF prog-id=241 op=LOAD Jan 16 21:21:19.936000 audit: BPF prog-id=242 op=LOAD Jan 16 21:21:19.936000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.936000 audit: BPF prog-id=242 op=UNLOAD Jan 16 21:21:19.936000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.939865 containerd[1592]: time="2026-01-16T21:21:19.939556885Z" level=info msg="connecting to shim feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083" address="unix:///run/containerd/s/ccc4a53685a310cb587b8b23ad1f263a4f91311676daecc6136ee8082841adcb" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:19.940000 audit: BPF prog-id=243 op=LOAD Jan 16 21:21:19.940000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.940000 audit: BPF prog-id=244 op=LOAD Jan 16 21:21:19.940000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.940000 audit: BPF 
prog-id=244 op=UNLOAD Jan 16 21:21:19.940000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.940000 audit: BPF prog-id=243 op=UNLOAD Jan 16 21:21:19.940000 audit[5410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 Jan 16 21:21:19.940000 audit: BPF prog-id=245 op=LOAD Jan 16 21:21:19.940000 audit[5410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5291 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337376662336338656434356564316431343039313532633831653139 
Jan 16 21:21:19.973000 audit: BPF prog-id=246 op=LOAD Jan 16 21:21:19.975000 audit: BPF prog-id=247 op=LOAD Jan 16 21:21:19.975000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.975000 audit: BPF prog-id=247 op=UNLOAD Jan 16 21:21:19.975000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.977000 audit: BPF prog-id=248 op=LOAD Jan 16 21:21:19.977000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.977000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.979000 audit: BPF prog-id=249 op=LOAD Jan 16 21:21:19.979000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.979000 audit: BPF prog-id=249 op=UNLOAD Jan 16 21:21:19.979000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.979000 audit: BPF prog-id=248 op=UNLOAD Jan 16 21:21:19.979000 audit[5413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:19.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:19.979000 audit: BPF prog-id=250 op=LOAD Jan 16 21:21:19.979000 audit[5413]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5396 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:19.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376334373464306462313762323964343365633535323664323434 Jan 16 21:21:20.010467 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:20.189728 containerd[1592]: time="2026-01-16T21:21:20.189565985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hngd,Uid:b99000d7-a136-4299-82d0-76fa7e3c28f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34\"" Jan 16 21:21:20.213171 containerd[1592]: time="2026-01-16T21:21:20.213076587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:21:20.220973 systemd[1]: Started cri-containerd-feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083.scope - libcontainer container feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083. 
Jan 16 21:21:20.254991 containerd[1592]: time="2026-01-16T21:21:20.254950233Z" level=info msg="StartContainer for \"c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb\" returns successfully" Jan 16 21:21:20.304525 containerd[1592]: time="2026-01-16T21:21:20.304454614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:20.316180 containerd[1592]: time="2026-01-16T21:21:20.315248790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:20.316892 containerd[1592]: time="2026-01-16T21:21:20.316559701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:21:20.319896 kubelet[2889]: E0116 21:21:20.318055 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:20.319896 kubelet[2889]: E0116 21:21:20.318111 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:20.319896 kubelet[2889]: E0116 21:21:20.318195 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:21:20.335555 containerd[1592]: time="2026-01-16T21:21:20.334502877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:21:20.455699 containerd[1592]: time="2026-01-16T21:21:20.454877317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:20.469823 containerd[1592]: time="2026-01-16T21:21:20.468907101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:20.469823 containerd[1592]: time="2026-01-16T21:21:20.462914389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:21:20.470882 kubelet[2889]: E0116 21:21:20.470524 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:21:20.470882 kubelet[2889]: E0116 21:21:20.470805 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:21:20.470998 kubelet[2889]: E0116 21:21:20.470909 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:20.471093 kubelet[2889]: E0116 21:21:20.470980 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:20.480219 kubelet[2889]: E0116 21:21:20.480189 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:20.501486 kubelet[2889]: E0116 21:21:20.501052 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:20.502200 kubelet[2889]: E0116 21:21:20.502160 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:20.510701 kubelet[2889]: E0116 21:21:20.510020 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:20.608000 audit: BPF prog-id=251 op=LOAD Jan 16 21:21:20.641695 kernel: kauditd_printk_skb: 177 callbacks suppressed Jan 16 21:21:20.641820 kernel: audit: type=1334 audit(1768598480.608:726): prog-id=251 op=LOAD Jan 16 21:21:20.641504 systemd-networkd[1517]: cali757e65142c9: Gained IPv6LL Jan 16 21:21:20.636000 audit: BPF prog-id=252 op=LOAD Jan 16 21:21:20.676871 kernel: audit: type=1334 audit(1768598480.636:727): prog-id=252 op=LOAD Jan 16 21:21:20.636000 audit[5482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000b2238 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.636000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.636000 audit: BPF prog-id=252 op=UNLOAD Jan 16 21:21:20.636000 audit[5482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.636000 audit: BPF prog-id=253 op=LOAD Jan 16 21:21:20.636000 audit[5482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000b2488 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.636000 audit: BPF prog-id=254 op=LOAD Jan 16 21:21:20.678729 kernel: audit: type=1300 audit(1768598480.636:727): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000b2238 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.678771 kernel: audit: type=1327 audit(1768598480.636:727): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.678808 kernel: audit: type=1334 audit(1768598480.636:728): prog-id=252 op=UNLOAD Jan 16 21:21:20.678828 kernel: audit: type=1300 audit(1768598480.636:728): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.678847 kernel: audit: type=1327 audit(1768598480.636:728): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.678869 kernel: audit: type=1334 audit(1768598480.636:729): prog-id=253 op=LOAD Jan 16 21:21:20.678893 kernel: audit: type=1300 audit(1768598480.636:729): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000b2488 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.678925 kernel: audit: type=1327 audit(1768598480.636:729): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.636000 audit[5482]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000b2218 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.640000 audit: BPF prog-id=254 op=UNLOAD Jan 16 21:21:20.640000 audit[5482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.640000 audit: BPF prog-id=253 op=UNLOAD Jan 16 21:21:20.640000 audit[5482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.640000 audit: BPF prog-id=255 op=LOAD Jan 
16 21:21:20.640000 audit[5482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000b26e8 a2=98 a3=0 items=0 ppid=5471 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:20.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665616239646330613438396336303163313463346161316133643738 Jan 16 21:21:20.819877 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:21:20.911785 kubelet[2889]: I0116 21:21:20.881214 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j4h6j" podStartSLOduration=139.8811932 podStartE2EDuration="2m19.8811932s" podCreationTimestamp="2026-01-16 21:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:20.62298897 +0000 UTC m=+145.181291951" watchObservedRunningTime="2026-01-16 21:21:20.8811932 +0000 UTC m=+145.439496171" Jan 16 21:21:21.102231 kubelet[2889]: E0116 21:21:21.061948 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:21.212770 systemd-networkd[1517]: cali0f6a3072d97: Gained IPv6LL Jan 16 21:21:21.285958 containerd[1592]: time="2026-01-16T21:21:21.285148446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c579b5f-dsr85,Uid:f80ed623-af1a-45e7-a125-0c7c2229f592,Namespace:calico-system,Attempt:0,} returns sandbox id \"feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083\"" Jan 16 
21:21:21.340954 containerd[1592]: time="2026-01-16T21:21:21.336569305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:21:21.507535 containerd[1592]: time="2026-01-16T21:21:21.502234752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:21.527229 containerd[1592]: time="2026-01-16T21:21:21.526011621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:21:21.527229 containerd[1592]: time="2026-01-16T21:21:21.526196295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:21.527734 kubelet[2889]: E0116 21:21:21.527072 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:21.527734 kubelet[2889]: E0116 21:21:21.527120 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:21.527734 kubelet[2889]: E0116 21:21:21.527210 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:21.537690 kubelet[2889]: E0116 21:21:21.527249 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:21.598527 kubelet[2889]: E0116 21:21:21.593000 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:21.611514 kubelet[2889]: E0116 21:21:21.602712 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:21.650000 audit[5527]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=5527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:21.650000 audit[5527]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff13567b00 a2=0 a3=7fff13567aec items=0 ppid=3054 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:21.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:21.656878 kubelet[2889]: E0116 21:21:21.649765 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:21.668000 audit[5527]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=5527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:21.668000 audit[5527]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff13567b00 a2=0 a3=7fff13567aec items=0 ppid=3054 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:21.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:22.278991 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:40252.service - OpenSSH per-connection server daemon (10.0.0.1:40252). Jan 16 21:21:22.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.34:22-10.0.0.1:40252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:22.584968 kubelet[2889]: E0116 21:21:22.581119 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:22.584968 kubelet[2889]: E0116 21:21:22.582064 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:22.650000 audit[5534]: USER_ACCT pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:22.655000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:22.655000 audit[5534]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaf5a1180 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:22.655000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:22.661952 sshd[5534]: Accepted publickey for core from 10.0.0.1 port 40252 ssh2: RSA 
SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:22.659878 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:22.693743 systemd-logind[1570]: New session 9 of user core. Jan 16 21:21:22.723984 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 21:21:22.734000 audit[5534]: USER_START pid=5534 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:22.740000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:23.209000 audit[5553]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:23.209000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe919e6860 a2=0 a3=7ffe919e684c items=0 ppid=3054 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:23.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:23.286000 audit[5553]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:23.286000 audit[5553]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe919e6860 a2=0 a3=7ffe919e684c items=0 ppid=3054 pid=5553 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:23.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:23.398820 sshd[5542]: Connection closed by 10.0.0.1 port 40252 Jan 16 21:21:23.399865 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:23.404000 audit[5534]: USER_END pid=5534 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:23.406000 audit[5534]: CRED_DISP pid=5534 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:23.416039 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:40252.service: Deactivated successfully. Jan 16 21:21:23.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.34:22-10.0.0.1:40252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:23.425965 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 21:21:23.430960 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Jan 16 21:21:23.442085 systemd-logind[1570]: Removed session 9. 
Jan 16 21:21:23.601883 kubelet[2889]: E0116 21:21:23.601074 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:26.064190 containerd[1592]: time="2026-01-16T21:21:26.063527089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:21:26.161964 containerd[1592]: time="2026-01-16T21:21:26.161112857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:26.167820 containerd[1592]: time="2026-01-16T21:21:26.167582809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:21:26.167820 containerd[1592]: time="2026-01-16T21:21:26.167803049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:26.169144 kubelet[2889]: E0116 21:21:26.169099 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:26.174461 kubelet[2889]: E0116 21:21:26.173487 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:26.174461 kubelet[2889]: E0116 21:21:26.173569 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:26.185494 containerd[1592]: time="2026-01-16T21:21:26.184916511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:21:26.258750 containerd[1592]: time="2026-01-16T21:21:26.258699885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:26.265392 containerd[1592]: time="2026-01-16T21:21:26.265144350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:21:26.265840 containerd[1592]: time="2026-01-16T21:21:26.265701316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:26.267566 kubelet[2889]: E0116 21:21:26.266229 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:26.267566 kubelet[2889]: E0116 21:21:26.266745 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:26.267566 kubelet[2889]: E0116 21:21:26.266936 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:26.267566 kubelet[2889]: E0116 21:21:26.266991 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:28.443936 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:54452.service - OpenSSH per-connection server daemon (10.0.0.1:54452). Jan 16 21:21:28.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:54452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:28.468248 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 16 21:21:28.468798 kernel: audit: type=1130 audit(1768598488.442:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:54452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:28.701000 audit[5564]: USER_ACCT pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.705086 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 54452 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:28.717755 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:28.739207 systemd-logind[1570]: New session 10 of user core. 
Jan 16 21:21:28.751523 kernel: audit: type=1101 audit(1768598488.701:748): pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.752732 kernel: audit: type=1103 audit(1768598488.709:749): pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.709000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.798570 kernel: audit: type=1006 audit(1768598488.710:750): pid=5564 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 16 21:21:28.821913 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 16 21:21:28.710000 audit[5564]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc743fbba0 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:28.872501 kernel: audit: type=1300 audit(1768598488.710:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc743fbba0 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:28.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:28.896493 kernel: audit: type=1327 audit(1768598488.710:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:28.896721 kernel: audit: type=1105 audit(1768598488.831:751): pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.831000 audit[5564]: USER_START pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:28.838000 audit[5568]: CRED_ACQ pid=5568 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.011056 kernel: audit: 
type=1103 audit(1768598488.838:752): pid=5568 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.066947 containerd[1592]: time="2026-01-16T21:21:29.066872024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:21:29.157918 containerd[1592]: time="2026-01-16T21:21:29.157745044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:29.170852 containerd[1592]: time="2026-01-16T21:21:29.170563330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:21:29.170990 containerd[1592]: time="2026-01-16T21:21:29.170890840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:29.172128 kubelet[2889]: E0116 21:21:29.171779 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:29.172128 kubelet[2889]: E0116 21:21:29.171927 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:29.172128 kubelet[2889]: E0116 21:21:29.172010 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:29.172128 kubelet[2889]: E0116 21:21:29.172053 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:29.255875 sshd[5568]: Connection closed by 10.0.0.1 port 54452 Jan 16 21:21:29.257795 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:29.266000 audit[5564]: USER_END pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.273043 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:54452.service: Deactivated successfully. Jan 16 21:21:29.284927 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 21:21:29.299744 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Jan 16 21:21:29.305708 systemd-logind[1570]: Removed session 10. 
Jan 16 21:21:29.335524 kernel: audit: type=1106 audit(1768598489.266:753): pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.335783 kernel: audit: type=1104 audit(1768598489.266:754): pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.266000 audit[5564]: CRED_DISP pid=5564 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:29.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.34:22-10.0.0.1:54452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:31.074031 containerd[1592]: time="2026-01-16T21:21:31.069834636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:31.160755 containerd[1592]: time="2026-01-16T21:21:31.160095921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:31.171901 containerd[1592]: time="2026-01-16T21:21:31.171737253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:31.171901 containerd[1592]: time="2026-01-16T21:21:31.171849070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:31.173797 kubelet[2889]: E0116 21:21:31.173152 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:31.175200 kubelet[2889]: E0116 21:21:31.174866 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:31.175200 kubelet[2889]: E0116 21:21:31.174964 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:31.175200 kubelet[2889]: E0116 21:21:31.175006 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:34.290050 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:44826.service - OpenSSH per-connection server daemon (10.0.0.1:44826). Jan 16 21:21:34.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:44826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:34.316892 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:34.317000 kernel: audit: type=1130 audit(1768598494.288:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:44826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:34.570028 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 44826 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:34.567000 audit[5590]: USER_ACCT pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.576839 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:34.621896 systemd-logind[1570]: New session 11 of user core. 
Jan 16 21:21:34.568000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.667956 kernel: audit: type=1101 audit(1768598494.567:757): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.671077 kernel: audit: type=1103 audit(1768598494.568:758): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.671125 kernel: audit: type=1006 audit(1768598494.568:759): pid=5590 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 21:21:34.568000 audit[5590]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe9c16470 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:34.742406 kernel: audit: type=1300 audit(1768598494.568:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe9c16470 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:34.568000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:34.761802 kernel: audit: type=1327 audit(1768598494.568:759): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:34.767250 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 21:21:34.783000 audit[5590]: USER_START pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.842514 kernel: audit: type=1105 audit(1768598494.783:760): pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.842805 kernel: audit: type=1103 audit(1768598494.796:761): pid=5596 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:34.796000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:35.100028 containerd[1592]: time="2026-01-16T21:21:35.098064038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:35.255750 containerd[1592]: time="2026-01-16T21:21:35.255586163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:35.262934 containerd[1592]: time="2026-01-16T21:21:35.262774764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:35.262934 containerd[1592]: time="2026-01-16T21:21:35.262888755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:35.263815 kubelet[2889]: E0116 21:21:35.263227 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:35.263815 kubelet[2889]: E0116 21:21:35.263518 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:35.263815 kubelet[2889]: E0116 21:21:35.263722 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:35.263815 kubelet[2889]: E0116 21:21:35.263763 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" 
podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:35.409824 sshd[5596]: Connection closed by 10.0.0.1 port 44826 Jan 16 21:21:35.409719 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:35.420000 audit[5590]: USER_END pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:35.433032 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:44826.service: Deactivated successfully. Jan 16 21:21:35.435056 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Jan 16 21:21:35.440996 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 21:21:35.450470 systemd-logind[1570]: Removed session 11. Jan 16 21:21:35.494208 kernel: audit: type=1106 audit(1768598495.420:762): pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:35.420000 audit[5590]: CRED_DISP pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:35.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.34:22-10.0.0.1:44826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:35.539514 kernel: audit: type=1104 audit(1768598495.420:763): pid=5590 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:36.075544 containerd[1592]: time="2026-01-16T21:21:36.074515722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:21:36.166974 containerd[1592]: time="2026-01-16T21:21:36.166921571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:36.181765 containerd[1592]: time="2026-01-16T21:21:36.181489595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:21:36.182813 containerd[1592]: time="2026-01-16T21:21:36.181918895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:36.183774 kubelet[2889]: E0116 21:21:36.183485 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:36.183774 kubelet[2889]: E0116 21:21:36.183548 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:21:36.183774 kubelet[2889]: E0116 21:21:36.183761 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:36.192062 containerd[1592]: time="2026-01-16T21:21:36.189548660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:21:36.277241 containerd[1592]: time="2026-01-16T21:21:36.276514414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:36.288730 containerd[1592]: time="2026-01-16T21:21:36.288541333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:21:36.292785 kubelet[2889]: E0116 21:21:36.289861 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:21:36.292785 kubelet[2889]: E0116 21:21:36.289913 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:21:36.292785 kubelet[2889]: E0116 21:21:36.289998 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:36.292785 kubelet[2889]: E0116 21:21:36.290051 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:36.311178 containerd[1592]: time="2026-01-16T21:21:36.288797019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:37.079903 kubelet[2889]: E0116 21:21:37.079788 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:37.083232 containerd[1592]: time="2026-01-16T21:21:37.082998121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:21:37.193411 containerd[1592]: time="2026-01-16T21:21:37.192592934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:37.202522 containerd[1592]: time="2026-01-16T21:21:37.201536624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:21:37.202522 containerd[1592]: time="2026-01-16T21:21:37.201790547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:37.206015 kubelet[2889]: E0116 21:21:37.204226 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:37.206185 kubelet[2889]: E0116 21:21:37.206152 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:21:37.207573 kubelet[2889]: E0116 21:21:37.207538 2889 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:37.208906 kubelet[2889]: E0116 21:21:37.208851 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:37.559735 kubelet[2889]: E0116 21:21:37.556808 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:40.472877 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:44838.service - OpenSSH per-connection server daemon (10.0.0.1:44838). Jan 16 21:21:40.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:44838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:40.487730 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:40.487822 kernel: audit: type=1130 audit(1768598500.473:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:44838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:40.867000 audit[5638]: USER_ACCT pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:40.875236 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:40.889828 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 44838 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:40.894998 systemd-logind[1570]: New session 12 of user core. Jan 16 21:21:40.929994 kernel: audit: type=1101 audit(1768598500.867:766): pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:40.872000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.007954 kernel: audit: type=1103 audit(1768598500.872:767): pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.008088 kernel: audit: type=1006 audit(1768598500.872:768): pid=5638 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 16 21:21:41.008132 kernel: audit: type=1300 audit(1768598500.872:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb5f10420 a2=3 a3=0 items=0 ppid=1 pid=5638 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:40.872000 audit[5638]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb5f10420 a2=3 a3=0 items=0 ppid=1 pid=5638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:41.056740 kernel: audit: type=1327 audit(1768598500.872:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:40.872000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:41.056881 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 21:21:41.074075 kernel: audit: type=1105 audit(1768598501.069:769): pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.069000 audit[5638]: USER_START pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.075000 audit[5642]: CRED_ACQ pid=5642 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.182857 kernel: audit: type=1103 audit(1768598501.075:770): pid=5642 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.561474 sshd[5642]: Connection closed by 10.0.0.1 port 44838 Jan 16 21:21:41.564156 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:41.573000 audit[5638]: USER_END pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.587225 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:44838.service: Deactivated successfully. Jan 16 21:21:41.593492 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 21:21:41.597039 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Jan 16 21:21:41.605937 systemd-logind[1570]: Removed session 12. 
Jan 16 21:21:41.574000 audit[5638]: CRED_DISP pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.678894 kernel: audit: type=1106 audit(1768598501.573:771): pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.679022 kernel: audit: type=1104 audit(1768598501.574:772): pid=5638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:41.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.34:22-10.0.0.1:44838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:42.067092 kubelet[2889]: E0116 21:21:42.067053 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:42.086460 kubelet[2889]: E0116 21:21:42.084846 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:46.100581 kubelet[2889]: E0116 21:21:46.092154 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:46.100581 kubelet[2889]: E0116 21:21:46.096925 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:46.606513 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:52278.service - OpenSSH per-connection server daemon (10.0.0.1:52278). 
Jan 16 21:21:46.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.34:22-10.0.0.1:52278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:46.634770 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:46.634852 kernel: audit: type=1130 audit(1768598506.606:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.34:22-10.0.0.1:52278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:46.840154 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 52278 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:46.838000 audit[5656]: USER_ACCT pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:46.849188 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:46.875913 systemd-logind[1570]: New session 13 of user core. 
Jan 16 21:21:46.898997 kernel: audit: type=1101 audit(1768598506.838:775): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:46.899085 kernel: audit: type=1103 audit(1768598506.843:776): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:46.843000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:46.846000 audit[5656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4632a7e0 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:46.984100 kernel: audit: type=1006 audit(1768598506.846:777): pid=5656 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 16 21:21:46.984434 kernel: audit: type=1300 audit(1768598506.846:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4632a7e0 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:47.047790 kernel: audit: type=1327 audit(1768598506.846:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:46.846000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:47.048526 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 21:21:47.063000 audit[5656]: USER_START pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.137119 kernel: audit: type=1105 audit(1768598507.063:778): pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.074000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.192762 kernel: audit: type=1103 audit(1768598507.074:779): pid=5660 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.546855 sshd[5660]: Connection closed by 10.0.0.1 port 52278 Jan 16 21:21:47.548226 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:47.549000 audit[5656]: USER_END pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 16 21:21:47.562918 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:52278.service: Deactivated successfully. Jan 16 21:21:47.572944 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 21:21:47.581087 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Jan 16 21:21:47.585821 systemd-logind[1570]: Removed session 13. Jan 16 21:21:47.551000 audit[5656]: CRED_DISP pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.676027 kernel: audit: type=1106 audit(1768598507.549:780): pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.676169 kernel: audit: type=1104 audit(1768598507.551:781): pid=5656 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:47.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.34:22-10.0.0.1:52278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:48.075036 kubelet[2889]: E0116 21:21:48.071223 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:48.087775 kubelet[2889]: E0116 21:21:48.085800 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:21:49.082581 containerd[1592]: time="2026-01-16T21:21:49.082123450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:21:49.191167 containerd[1592]: time="2026-01-16T21:21:49.189787694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:49.205136 containerd[1592]: time="2026-01-16T21:21:49.199074262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:21:49.205136 containerd[1592]: time="2026-01-16T21:21:49.199175311Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:49.209211 kubelet[2889]: E0116 21:21:49.206583 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:49.209211 kubelet[2889]: E0116 21:21:49.207920 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:21:49.209211 kubelet[2889]: E0116 21:21:49.208021 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:49.219219 containerd[1592]: time="2026-01-16T21:21:49.219182050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:21:49.326584 containerd[1592]: time="2026-01-16T21:21:49.326221785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:49.377059 containerd[1592]: time="2026-01-16T21:21:49.373168021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:21:49.391881 containerd[1592]: time="2026-01-16T21:21:49.374571676Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:49.401157 kubelet[2889]: E0116 21:21:49.401025 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:49.401157 kubelet[2889]: E0116 21:21:49.401083 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:21:49.401591 kubelet[2889]: E0116 21:21:49.401180 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:49.401591 kubelet[2889]: E0116 21:21:49.401236 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:21:50.105248 kubelet[2889]: E0116 21:21:50.097097 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:21:51.079111 kubelet[2889]: E0116 21:21:51.079028 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:21:52.594834 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:59588.service - OpenSSH per-connection server daemon (10.0.0.1:59588). Jan 16 21:21:52.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.34:22-10.0.0.1:59588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:52.636518 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:52.636777 kernel: audit: type=1130 audit(1768598512.594:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.34:22-10.0.0.1:59588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:52.917000 audit[5686]: USER_ACCT pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:52.922568 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 59588 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:52.940769 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:52.991006 kernel: audit: type=1101 audit(1768598512.917:784): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:52.930000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.038110 systemd-logind[1570]: New session 14 of user core. 
Jan 16 21:21:53.055950 kernel: audit: type=1103 audit(1768598512.930:785): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:52.930000 audit[5686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceeae1d90 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:53.182608 kernel: audit: type=1006 audit(1768598512.930:786): pid=5686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 16 21:21:53.183195 kernel: audit: type=1300 audit(1768598512.930:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceeae1d90 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:52.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:53.196246 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 16 21:21:53.214547 kernel: audit: type=1327 audit(1768598512.930:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:53.242000 audit[5686]: USER_START pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.317571 kernel: audit: type=1105 audit(1768598513.242:787): pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.376949 kernel: audit: type=1103 audit(1768598513.253:788): pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.253000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.731774 sshd[5698]: Connection closed by 10.0.0.1 port 59588 Jan 16 21:21:53.734934 sshd-session[5686]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:53.749000 audit[5686]: USER_END pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 16 21:21:53.779986 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:59588.service: Deactivated successfully. Jan 16 21:21:53.790606 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 21:21:53.795872 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Jan 16 21:21:53.800801 systemd-logind[1570]: Removed session 14. Jan 16 21:21:53.821185 kernel: audit: type=1106 audit(1768598513.749:789): pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.821536 kernel: audit: type=1104 audit(1768598513.750:790): pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.750000 audit[5686]: CRED_DISP pid=5686 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:53.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.34:22-10.0.0.1:59588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:57.090122 containerd[1592]: time="2026-01-16T21:21:57.082810695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:21:57.230201 containerd[1592]: time="2026-01-16T21:21:57.230114337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:57.236002 containerd[1592]: time="2026-01-16T21:21:57.235247804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:21:57.236894 containerd[1592]: time="2026-01-16T21:21:57.235962595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:57.236958 kubelet[2889]: E0116 21:21:57.236755 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:57.236958 kubelet[2889]: E0116 21:21:57.236810 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:21:57.236958 kubelet[2889]: E0116 21:21:57.236896 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:21:57.236958 kubelet[2889]: E0116 21:21:57.236939 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:21:58.096251 containerd[1592]: time="2026-01-16T21:21:58.095830990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:21:58.189512 containerd[1592]: time="2026-01-16T21:21:58.189200044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:21:58.209154 containerd[1592]: time="2026-01-16T21:21:58.209019053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:21:58.209154 containerd[1592]: time="2026-01-16T21:21:58.209070689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:21:58.221543 kubelet[2889]: E0116 21:21:58.217064 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:58.221543 kubelet[2889]: E0116 21:21:58.217125 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:21:58.221543 kubelet[2889]: E0116 21:21:58.217205 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:21:58.227184 kubelet[2889]: E0116 21:21:58.226040 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:21:58.844221 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:59598.service - OpenSSH per-connection server daemon (10.0.0.1:59598). Jan 16 21:21:58.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.34:22-10.0.0.1:59598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:58.859750 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:21:58.859853 kernel: audit: type=1130 audit(1768598518.844:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.34:22-10.0.0.1:59598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:59.112000 audit[5716]: USER_ACCT pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.175822 kernel: audit: type=1101 audit(1768598519.112:793): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.145236 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:21:59.176237 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 59598 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:21:59.136000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.188178 systemd-logind[1570]: New session 15 of user core. Jan 16 21:21:59.231470 kernel: audit: type=1103 audit(1768598519.136:794): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.249935 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 16 21:21:59.141000 audit[5716]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29113140 a2=3 a3=0 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:59.362884 kernel: audit: type=1006 audit(1768598519.141:795): pid=5716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 16 21:21:59.363021 kernel: audit: type=1300 audit(1768598519.141:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29113140 a2=3 a3=0 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:59.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:59.396537 kernel: audit: type=1327 audit(1768598519.141:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:21:59.278000 audit[5716]: USER_START pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.483503 kernel: audit: type=1105 audit(1768598519.278:796): pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.293000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:59.551481 kernel: audit: type=1103 audit(1768598519.293:797): pid=5720 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:00.057045 sshd[5720]: Connection closed by 10.0.0.1 port 59598 Jan 16 21:22:00.058109 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:00.064000 audit[5716]: USER_END pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:00.115854 containerd[1592]: time="2026-01-16T21:22:00.110897887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:22:00.140608 kernel: audit: type=1106 audit(1768598520.064:798): pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:00.160517 kernel: audit: type=1104 audit(1768598520.065:799): pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:00.065000 audit[5716]: CRED_DISP pid=5716 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:00.147609 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:59598.service: Deactivated successfully. Jan 16 21:22:00.162031 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 21:22:00.173509 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Jan 16 21:22:00.178237 systemd-logind[1570]: Removed session 15. Jan 16 21:22:00.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.34:22-10.0.0.1:59598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:00.224058 containerd[1592]: time="2026-01-16T21:22:00.223879258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:00.242972 containerd[1592]: time="2026-01-16T21:22:00.241832617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:00.250218 containerd[1592]: time="2026-01-16T21:22:00.249531400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:22:00.251888 kubelet[2889]: E0116 21:22:00.251087 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:00.252567 kubelet[2889]: E0116 21:22:00.252220 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:00.252929 kubelet[2889]: E0116 21:22:00.252614 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:00.263148 containerd[1592]: time="2026-01-16T21:22:00.262974944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:22:00.374246 containerd[1592]: time="2026-01-16T21:22:00.373242070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:00.388571 containerd[1592]: time="2026-01-16T21:22:00.382154462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:22:00.388571 containerd[1592]: time="2026-01-16T21:22:00.382816335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:00.399227 kubelet[2889]: E0116 21:22:00.396925 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:00.399227 kubelet[2889]: E0116 21:22:00.396993 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:00.399227 kubelet[2889]: E0116 21:22:00.397089 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:00.399227 kubelet[2889]: E0116 21:22:00.397149 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:22:01.092537 containerd[1592]: time="2026-01-16T21:22:01.090868876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:22:01.230998 containerd[1592]: time="2026-01-16T21:22:01.230946759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:01.246207 containerd[1592]: time="2026-01-16T21:22:01.246149749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:22:01.247036 containerd[1592]: time="2026-01-16T21:22:01.246615057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:01.249553 kubelet[2889]: E0116 21:22:01.249246 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:01.249969 kubelet[2889]: E0116 21:22:01.249943 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:01.250148 kubelet[2889]: E0116 21:22:01.250120 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:01.250481 kubelet[2889]: E0116 21:22:01.250236 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:22:02.133532 containerd[1592]: time="2026-01-16T21:22:02.131070085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:02.334203 containerd[1592]: time="2026-01-16T21:22:02.334036840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:02.343467 containerd[1592]: time="2026-01-16T21:22:02.341509863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:02.343467 containerd[1592]: time="2026-01-16T21:22:02.341859063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:02.344082 kubelet[2889]: E0116 21:22:02.344039 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:02.346965 kubelet[2889]: E0116 21:22:02.345608 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:02.346965 kubelet[2889]: E0116 21:22:02.345825 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:02.346965 kubelet[2889]: E0116 21:22:02.345866 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:22:03.110399 kubelet[2889]: E0116 21:22:03.108010 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:22:05.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:54164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:05.135505 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:54164.service - OpenSSH per-connection server daemon (10.0.0.1:54164). Jan 16 21:22:05.277984 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:05.278143 kernel: audit: type=1130 audit(1768598525.133:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:54164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:05.912000 audit[5739]: USER_ACCT pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:05.924182 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 54164 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:05.934172 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:05.981031 systemd-logind[1570]: New session 16 of user core. 
Jan 16 21:22:06.000996 kernel: audit: type=1101 audit(1768598525.912:802): pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:05.926000 audit[5739]: CRED_ACQ pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:06.090476 kernel: audit: type=1103 audit(1768598525.926:803): pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:06.094247 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 21:22:06.207460 kernel: audit: type=1006 audit(1768598525.927:804): pid=5739 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 21:22:06.211067 kernel: audit: type=1300 audit(1768598525.927:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb5f53ab0 a2=3 a3=0 items=0 ppid=1 pid=5739 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:05.927000 audit[5739]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb5f53ab0 a2=3 a3=0 items=0 ppid=1 pid=5739 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:05.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:06.144000 audit[5739]: USER_START pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:06.294232 kernel: audit: type=1327 audit(1768598525.927:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:06.301543 kernel: audit: type=1105 audit(1768598526.144:805): pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:06.301768 kernel: audit: type=1103 audit(1768598526.173:806): pid=5743 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:06.173000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:07.353556 sshd[5743]: Connection closed by 10.0.0.1 port 54164 Jan 16 21:22:07.355862 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:07.365000 audit[5739]: USER_END pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:07.366000 audit[5739]: CRED_DISP pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:07.639884 kernel: audit: type=1106 audit(1768598527.365:807): pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:07.640031 kernel: audit: type=1104 audit(1768598527.366:808): pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:07.673187 systemd[1]: 
sshd@14-10.0.0.34:22-10.0.0.1:54164.service: Deactivated successfully. Jan 16 21:22:07.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.34:22-10.0.0.1:54164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:07.685840 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 21:22:07.693532 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Jan 16 21:22:07.705202 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:54166.service - OpenSSH per-connection server daemon (10.0.0.1:54166). Jan 16 21:22:07.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.34:22-10.0.0.1:54166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:07.725845 systemd-logind[1570]: Removed session 16. Jan 16 21:22:08.294000 audit[5783]: USER_ACCT pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:08.314876 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:08.323000 audit[5783]: CRED_ACQ pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:08.324000 audit[5783]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6d1ecb80 a2=3 a3=0 items=0 ppid=1 pid=5783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:08.324000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:08.392121 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:08.441232 systemd-logind[1570]: New session 17 of user core. Jan 16 21:22:08.482099 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 21:22:08.510000 audit[5783]: USER_START pid=5783 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:08.541000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.264805 sshd[5788]: Connection closed by 10.0.0.1 port 54166 Jan 16 21:22:09.262045 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:09.279000 audit[5783]: USER_END pid=5783 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.279000 audit[5783]: CRED_DISP pid=5783 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.314190 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:54166.service: Deactivated successfully. 
Jan 16 21:22:09.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.34:22-10.0.0.1:54166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:09.322195 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 21:22:09.333762 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Jan 16 21:22:09.342585 systemd-logind[1570]: Removed session 17. Jan 16 21:22:09.347000 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:54170.service - OpenSSH per-connection server daemon (10.0.0.1:54170). Jan 16 21:22:09.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:54170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:09.733000 audit[5799]: USER_ACCT pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.738530 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 54170 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:09.746000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.746000 audit[5799]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca0320410 a2=3 a3=0 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:09.746000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:09.760246 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:09.821877 systemd-logind[1570]: New session 18 of user core. Jan 16 21:22:09.843988 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 21:22:09.887000 audit[5799]: USER_START pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:09.912000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:10.611771 sshd[5803]: Connection closed by 10.0.0.1 port 54170 Jan 16 21:22:10.625035 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:10.636000 audit[5799]: USER_END pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:10.662607 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 16 21:22:10.662850 kernel: audit: type=1106 audit(1768598530.636:825): pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 16 21:22:10.665900 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:54170.service: Deactivated successfully. Jan 16 21:22:10.677110 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 21:22:10.686119 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Jan 16 21:22:10.693073 systemd-logind[1570]: Removed session 18. Jan 16 21:22:10.641000 audit[5799]: CRED_DISP pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:10.844016 kernel: audit: type=1104 audit(1768598530.641:826): pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:10.844125 kernel: audit: type=1131 audit(1768598530.667:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:54170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:10.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.34:22-10.0.0.1:54170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:11.077039 kubelet[2889]: E0116 21:22:11.069764 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:22:12.112808 kubelet[2889]: E0116 21:22:12.112764 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:22:13.065148 kubelet[2889]: E0116 21:22:13.064936 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:22:15.096223 kubelet[2889]: E0116 21:22:15.096166 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:22:15.143626 kubelet[2889]: E0116 21:22:15.098759 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:22:15.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:35282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:15.670009 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:35282.service - OpenSSH per-connection server daemon (10.0.0.1:35282). 
Jan 16 21:22:15.727058 kernel: audit: type=1130 audit(1768598535.669:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:35282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:15.972000 audit[5818]: USER_ACCT pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:15.976041 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 35282 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:15.991902 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:16.029104 systemd-logind[1570]: New session 19 of user core. Jan 16 21:22:16.072968 kernel: audit: type=1101 audit(1768598535.972:829): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:15.986000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:16.114027 kubelet[2889]: E0116 21:22:16.113892 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:22:16.174785 kernel: audit: type=1103 audit(1768598535.986:830): pid=5818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:16.174921 kernel: audit: type=1006 audit(1768598535.986:831): pid=5818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 16 21:22:15.986000 audit[5818]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1d4baa10 a2=3 a3=0 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.183902 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 16 21:22:16.272778 kernel: audit: type=1300 audit(1768598535.986:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1d4baa10 a2=3 a3=0 items=0 ppid=1 pid=5818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:15.986000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:16.303783 kernel: audit: type=1327 audit(1768598535.986:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:16.199000 audit[5818]: USER_START pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:16.435893 kernel: audit: type=1105 audit(1768598536.199:832): pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:16.539895 kernel: audit: type=1103 audit(1768598536.201:833): pid=5822 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:16.201000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:17.148097 sshd[5822]: Connection closed by 10.0.0.1 port 35282 
Jan 16 21:22:17.150865 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:17.171000 audit[5818]: USER_END pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:17.192212 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:35282.service: Deactivated successfully. Jan 16 21:22:17.201464 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Jan 16 21:22:17.214012 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 21:22:17.240960 systemd-logind[1570]: Removed session 19. Jan 16 21:22:17.332170 kernel: audit: type=1106 audit(1768598537.171:834): pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:17.172000 audit[5818]: CRED_DISP pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:17.441863 kernel: audit: type=1104 audit(1768598537.172:835): pid=5818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:17.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.34:22-10.0.0.1:35282 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 16 21:22:22.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:35292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:22.288009 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:35292.service - OpenSSH per-connection server daemon (10.0.0.1:35292). Jan 16 21:22:22.329140 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:22.329219 kernel: audit: type=1130 audit(1768598542.287:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:35292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:22.736000 audit[5837]: USER_ACCT pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:22.771415 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 35292 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:22.804114 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:22.829893 kernel: audit: type=1101 audit(1768598542.736:838): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:22.795000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
Jan 16 21:22:22.922576 kernel: audit: type=1103 audit(1768598542.795:839): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:22.922855 kernel: audit: type=1006 audit(1768598542.795:840): pid=5837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 16 21:22:22.898156 systemd-logind[1570]: New session 20 of user core. Jan 16 21:22:22.795000 audit[5837]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeefe3ada0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:22.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:23.015515 kernel: audit: type=1300 audit(1768598542.795:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeefe3ada0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:23.015779 kernel: audit: type=1327 audit(1768598542.795:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:23.017094 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 16 21:22:23.061000 audit[5837]: USER_START pid=5837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:23.088131 kubelet[2889]: E0116 21:22:23.082246 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:22:23.101781 kubelet[2889]: E0116 21:22:23.100569 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:22:23.142508 kernel: audit: type=1105 audit(1768598543.061:841): pid=5837 uid=0 auid=500 ses=20 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:23.077000 audit[5841]: CRED_ACQ pid=5841 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:23.216121 kernel: audit: type=1103 audit(1768598543.077:842): pid=5841 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:24.022475 sshd[5841]: Connection closed by 10.0.0.1 port 35292 Jan 16 21:22:24.024522 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:24.038000 audit[5837]: USER_END pid=5837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:24.047513 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Jan 16 21:22:24.051820 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:35292.service: Deactivated successfully. Jan 16 21:22:24.103613 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 16 21:22:24.123950 kubelet[2889]: E0116 21:22:24.121974 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:22:24.135843 systemd-logind[1570]: Removed session 20. Jan 16 21:22:24.186568 kernel: audit: type=1106 audit(1768598544.038:843): pid=5837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:24.038000 audit[5837]: CRED_DISP pid=5837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:24.249618 kernel: audit: type=1104 audit(1768598544.038:844): pid=5837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:24.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.34:22-10.0.0.1:35292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:25.064533 kubelet[2889]: E0116 21:22:25.060585 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:27.096445 kubelet[2889]: E0116 21:22:27.093002 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:22:29.045523 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:50550.service - OpenSSH per-connection server daemon (10.0.0.1:50550). Jan 16 21:22:29.079718 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:29.079854 kernel: audit: type=1130 audit(1768598549.044:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.34:22-10.0.0.1:50550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:29.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.34:22-10.0.0.1:50550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:29.234000 audit[5855]: USER_ACCT pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.245042 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 50550 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:29.249546 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:29.292399 kernel: audit: type=1101 audit(1768598549.234:847): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.242000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.315206 systemd-logind[1570]: New session 21 of user core. Jan 16 21:22:29.327400 kernel: audit: type=1103 audit(1768598549.242:848): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.332237 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 21:22:29.363956 kernel: audit: type=1006 audit(1768598549.243:849): pid=5855 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 21:22:29.243000 audit[5855]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5f320210 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:29.415542 kernel: audit: type=1300 audit(1768598549.243:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5f320210 a2=3 a3=0 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:29.415753 kernel: audit: type=1327 audit(1768598549.243:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:29.243000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:29.360000 audit[5855]: USER_START pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.483557 kernel: audit: type=1105 audit(1768598549.360:850): pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.373000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.524440 kernel: audit: type=1103 audit(1768598549.373:851): pid=5859 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.670734 sshd[5859]: Connection closed by 10.0.0.1 port 50550 Jan 16 21:22:29.672620 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:29.676000 audit[5855]: USER_END pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.681798 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:50550.service: Deactivated successfully. Jan 16 21:22:29.685239 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 21:22:29.694436 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Jan 16 21:22:29.696223 systemd-logind[1570]: Removed session 21. 
Jan 16 21:22:29.677000 audit[5855]: CRED_DISP pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.742633 kernel: audit: type=1106 audit(1768598549.676:852): pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.742853 kernel: audit: type=1104 audit(1768598549.677:853): pid=5855 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:29.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.34:22-10.0.0.1:50550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:30.064209 kubelet[2889]: E0116 21:22:30.061199 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:30.071327 kubelet[2889]: E0116 21:22:30.071021 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:22:30.074023 containerd[1592]: time="2026-01-16T21:22:30.073771296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:22:30.215036 containerd[1592]: time="2026-01-16T21:22:30.214901717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:30.219815 containerd[1592]: time="2026-01-16T21:22:30.219586123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:22:30.219815 containerd[1592]: time="2026-01-16T21:22:30.219789272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:30.220072 kubelet[2889]: E0116 21:22:30.219965 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:30.220072 kubelet[2889]: E0116 21:22:30.220011 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:30.220182 kubelet[2889]: E0116 21:22:30.220094 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:30.229792 containerd[1592]: time="2026-01-16T21:22:30.228976441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:22:30.317195 containerd[1592]: time="2026-01-16T21:22:30.313637336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:30.321887 containerd[1592]: time="2026-01-16T21:22:30.321419555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:22:30.321887 containerd[1592]: time="2026-01-16T21:22:30.321618477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:30.322992 kubelet[2889]: E0116 21:22:30.322931 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:30.324781 kubelet[2889]: E0116 21:22:30.323146 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:30.324781 kubelet[2889]: E0116 21:22:30.323537 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:30.324781 kubelet[2889]: E0116 21:22:30.323605 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:22:34.715470 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:53712.service - OpenSSH per-connection server daemon (10.0.0.1:53712). 
Jan 16 21:22:34.739926 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:34.739967 kernel: audit: type=1130 audit(1768598554.715:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.34:22-10.0.0.1:53712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:34.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.34:22-10.0.0.1:53712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:34.919000 audit[5880]: USER_ACCT pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:34.921950 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 53712 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:34.932083 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:34.928000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:34.982898 systemd-logind[1570]: New session 22 of user core. 
Jan 16 21:22:35.014168 kernel: audit: type=1101 audit(1768598554.919:856): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.014424 kernel: audit: type=1103 audit(1768598554.928:857): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.039607 kernel: audit: type=1006 audit(1768598554.928:858): pid=5880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 16 21:22:35.039803 kernel: audit: type=1300 audit(1768598554.928:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1cf68190 a2=3 a3=0 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:34.928000 audit[5880]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1cf68190 a2=3 a3=0 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:35.061963 kubelet[2889]: E0116 21:22:35.061223 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:35.090891 kernel: audit: type=1327 audit(1768598554.928:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:34.928000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D 
Jan 16 21:22:35.087981 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 21:22:35.108000 audit[5880]: USER_START pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.179449 kernel: audit: type=1105 audit(1768598555.108:859): pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.179569 kernel: audit: type=1103 audit(1768598555.128:860): pid=5884 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.128000 audit[5884]: CRED_ACQ pid=5884 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.563197 sshd[5884]: Connection closed by 10.0.0.1 port 53712 Jan 16 21:22:35.563988 sshd-session[5880]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:35.562000 audit[5880]: USER_END pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.573629 systemd[1]: 
sshd@20-10.0.0.34:22-10.0.0.1:53712.service: Deactivated successfully. Jan 16 21:22:35.581976 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 21:22:35.586759 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Jan 16 21:22:35.590221 systemd-logind[1570]: Removed session 22. Jan 16 21:22:35.566000 audit[5880]: CRED_DISP pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.637412 kernel: audit: type=1106 audit(1768598555.562:861): pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.637521 kernel: audit: type=1104 audit(1768598555.566:862): pid=5880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:35.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.34:22-10.0.0.1:53712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:36.066890 kubelet[2889]: E0116 21:22:36.062892 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:22:36.075193 kubelet[2889]: E0116 21:22:36.073620 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:22:37.059975 kubelet[2889]: E0116 21:22:37.059564 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:22:38.061117 kubelet[2889]: E0116 21:22:38.060754 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:22:40.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.34:22-10.0.0.1:53720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:40.596964 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:53720.service - OpenSSH per-connection server daemon (10.0.0.1:53720). Jan 16 21:22:40.604209 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:40.604523 kernel: audit: type=1130 audit(1768598560.596:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.34:22-10.0.0.1:53720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:40.895952 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 53720 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:40.893000 audit[5923]: USER_ACCT pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:40.907837 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:40.901000 audit[5923]: CRED_ACQ pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:40.943848 systemd-logind[1570]: New session 23 of user core. Jan 16 21:22:40.962037 kernel: audit: type=1101 audit(1768598560.893:865): pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:40.962155 kernel: audit: type=1103 audit(1768598560.901:866): pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.002940 kernel: audit: type=1006 audit(1768598560.901:867): pid=5923 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 16 21:22:40.901000 audit[5923]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9e975900 a2=3 a3=0 items=0 ppid=1 pid=5923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:41.004958 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 21:22:41.043358 kernel: audit: type=1300 audit(1768598560.901:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9e975900 a2=3 a3=0 items=0 ppid=1 pid=5923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:41.043480 kernel: audit: type=1327 audit(1768598560.901:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:40.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:41.064193 kubelet[2889]: E0116 21:22:41.062972 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:22:41.025000 audit[5923]: USER_START pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.129193 kernel: audit: type=1105 audit(1768598561.025:868): pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.031000 audit[5927]: CRED_ACQ pid=5927 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.178503 kernel: audit: type=1103 audit(1768598561.031:869): pid=5927 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.433499 sshd[5927]: Connection closed by 10.0.0.1 port 53720 Jan 16 21:22:41.434799 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:41.439000 audit[5923]: USER_END pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.448761 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:53720.service: Deactivated successfully. Jan 16 21:22:41.465582 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 21:22:41.477129 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Jan 16 21:22:41.480087 systemd-logind[1570]: Removed session 23. 
Jan 16 21:22:41.440000 audit[5923]: CRED_DISP pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.514636 kernel: audit: type=1106 audit(1768598561.439:870): pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.514867 kernel: audit: type=1104 audit(1768598561.440:871): pid=5923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:41.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.34:22-10.0.0.1:53720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:42.084831 kubelet[2889]: E0116 21:22:42.084131 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:45.079218 kubelet[2889]: E0116 21:22:45.078630 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:22:46.488858 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:49038.service - OpenSSH per-connection server daemon (10.0.0.1:49038). Jan 16 21:22:46.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:49038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:46.499021 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:46.499105 kernel: audit: type=1130 audit(1768598566.488:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:49038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:46.699000 audit[5961]: USER_ACCT pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.713815 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:46.722615 sshd[5961]: Accepted publickey for core from 10.0.0.1 port 49038 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:46.731199 systemd-logind[1570]: New session 24 of user core. Jan 16 21:22:46.738473 kernel: audit: type=1101 audit(1768598566.699:874): pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.705000 audit[5961]: CRED_ACQ pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.824796 kernel: audit: type=1103 audit(1768598566.705:875): pid=5961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.824950 kernel: audit: type=1006 audit(1768598566.705:876): pid=5961 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 16 21:22:46.705000 audit[5961]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4bae68a0 a2=3 a3=0 items=0 ppid=1 pid=5961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:46.878781 kernel: audit: type=1300 audit(1768598566.705:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4bae68a0 a2=3 a3=0 items=0 ppid=1 pid=5961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:46.879146 kernel: audit: type=1327 audit(1768598566.705:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:46.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:46.879049 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 21:22:46.899000 audit[5961]: USER_START pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.904000 audit[5965]: CRED_ACQ pid=5965 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.997018 kernel: audit: type=1105 audit(1768598566.899:877): pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:46.997146 kernel: audit: type=1103 audit(1768598566.904:878): pid=5965 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:47.088741 containerd[1592]: time="2026-01-16T21:22:47.085029576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:22:47.168842 containerd[1592]: time="2026-01-16T21:22:47.168754160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:47.180798 containerd[1592]: time="2026-01-16T21:22:47.179057073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:22:47.180798 containerd[1592]: time="2026-01-16T21:22:47.179436410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:47.180969 kubelet[2889]: E0116 21:22:47.179642 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:47.180969 kubelet[2889]: E0116 21:22:47.179809 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:47.180969 kubelet[2889]: E0116 21:22:47.179907 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:47.180969 kubelet[2889]: E0116 21:22:47.180427 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:22:47.267579 sshd[5965]: Connection closed by 10.0.0.1 port 49038 Jan 16 21:22:47.274652 sshd-session[5961]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:47.289000 audit[5961]: USER_END pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:47.298927 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:49038.service: Deactivated successfully. Jan 16 21:22:47.304174 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 21:22:47.311091 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Jan 16 21:22:47.320491 systemd-logind[1570]: Removed session 24. 
Jan 16 21:22:47.353941 kernel: audit: type=1106 audit(1768598567.289:879): pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:47.289000 audit[5961]: CRED_DISP pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:47.397958 kernel: audit: type=1104 audit(1768598567.289:880): pid=5961 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:47.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.34:22-10.0.0.1:49038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:50.069762 kubelet[2889]: E0116 21:22:50.063099 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:50.073604 containerd[1592]: time="2026-01-16T21:22:50.072765071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:22:50.177195 containerd[1592]: time="2026-01-16T21:22:50.174072236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:50.184760 containerd[1592]: time="2026-01-16T21:22:50.184114251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:22:50.184760 containerd[1592]: time="2026-01-16T21:22:50.184226121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:50.184938 kubelet[2889]: E0116 21:22:50.184612 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:50.184938 kubelet[2889]: E0116 21:22:50.184759 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:50.184938 kubelet[2889]: E0116 21:22:50.184845 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:50.194119 containerd[1592]: time="2026-01-16T21:22:50.192019669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:22:50.329980 containerd[1592]: time="2026-01-16T21:22:50.329783528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:50.334879 containerd[1592]: time="2026-01-16T21:22:50.334490341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:22:50.334879 containerd[1592]: time="2026-01-16T21:22:50.334767126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:50.335194 kubelet[2889]: E0116 21:22:50.335011 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:50.335194 kubelet[2889]: E0116 21:22:50.335067 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:50.335194 kubelet[2889]: E0116 21:22:50.335154 2889 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:50.335586 kubelet[2889]: E0116 21:22:50.335204 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:22:51.084987 containerd[1592]: time="2026-01-16T21:22:51.083597038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:51.187638 containerd[1592]: time="2026-01-16T21:22:51.184152884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:51.191862 containerd[1592]: time="2026-01-16T21:22:51.191585469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:51.191862 containerd[1592]: time="2026-01-16T21:22:51.191775923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, 
bytes read=0" Jan 16 21:22:51.192612 kubelet[2889]: E0116 21:22:51.192095 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:51.192612 kubelet[2889]: E0116 21:22:51.192229 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:51.192612 kubelet[2889]: E0116 21:22:51.192465 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:51.192612 kubelet[2889]: E0116 21:22:51.192512 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:22:52.092472 containerd[1592]: time="2026-01-16T21:22:52.081239060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:22:52.175942 containerd[1592]: time="2026-01-16T21:22:52.175880353Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Jan 16 21:22:52.187079 containerd[1592]: time="2026-01-16T21:22:52.186994107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:22:52.193635 containerd[1592]: time="2026-01-16T21:22:52.187493457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:52.195897 kubelet[2889]: E0116 21:22:52.188427 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:52.195897 kubelet[2889]: E0116 21:22:52.194410 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:52.195897 kubelet[2889]: E0116 21:22:52.194538 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:52.195897 kubelet[2889]: E0116 21:22:52.194584 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:22:52.329927 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:49048.service - OpenSSH per-connection server daemon (10.0.0.1:49048). Jan 16 21:22:52.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:49048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:52.344355 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:52.344439 kernel: audit: type=1130 audit(1768598572.329:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:49048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:52.731000 audit[5978]: USER_ACCT pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:52.733539 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 49048 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:52.740018 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:52.788136 systemd-logind[1570]: New session 25 of user core. 
Jan 16 21:22:52.799119 kernel: audit: type=1101 audit(1768598572.731:883): pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:52.736000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:52.843965 kernel: audit: type=1103 audit(1768598572.736:884): pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:52.844116 kernel: audit: type=1006 audit(1768598572.736:885): pid=5978 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 16 21:22:52.873214 kernel: audit: type=1300 audit(1768598572.736:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5664ea90 a2=3 a3=0 items=0 ppid=1 pid=5978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:52.736000 audit[5978]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5664ea90 a2=3 a3=0 items=0 ppid=1 pid=5978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:52.871223 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 16 21:22:52.925586 kernel: audit: type=1327 audit(1768598572.736:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:52.736000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:52.891000 audit[5978]: USER_START pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.005057 kernel: audit: type=1105 audit(1768598572.891:886): pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.005937 kernel: audit: type=1103 audit(1768598572.899:887): pid=5982 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:52.899000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.088933 containerd[1592]: time="2026-01-16T21:22:53.086073339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:53.193353 containerd[1592]: time="2026-01-16T21:22:53.192578307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:53.200173 containerd[1592]: time="2026-01-16T21:22:53.199588315Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:53.200173 containerd[1592]: time="2026-01-16T21:22:53.199856745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:53.201008 kubelet[2889]: E0116 21:22:53.200784 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:53.201008 kubelet[2889]: E0116 21:22:53.200928 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:53.201800 kubelet[2889]: E0116 21:22:53.201024 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:53.201800 kubelet[2889]: E0116 21:22:53.201073 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:22:53.340577 sshd[5982]: Connection closed by 10.0.0.1 port 49048 Jan 16 21:22:53.343114 sshd-session[5978]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:53.346000 audit[5978]: USER_END pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.372055 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:49048.service: Deactivated successfully. Jan 16 21:22:53.381248 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 21:22:53.390139 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Jan 16 21:22:53.399902 systemd-logind[1570]: Removed session 25. 
Jan 16 21:22:53.351000 audit[5978]: CRED_DISP pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.452827 kernel: audit: type=1106 audit(1768598573.346:888): pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.452963 kernel: audit: type=1104 audit(1768598573.351:889): pid=5978 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:53.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.34:22-10.0.0.1:49048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:54.063139 kubelet[2889]: E0116 21:22:54.062094 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:56.181043 systemd[1715]: Created slice background.slice - User Background Tasks Slice. Jan 16 21:22:56.200541 systemd[1715]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 16 21:22:56.290147 systemd[1715]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 16 21:22:58.060247 kubelet[2889]: E0116 21:22:58.060196 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:22:58.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:43340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:58.381108 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:43340.service - OpenSSH per-connection server daemon (10.0.0.1:43340). Jan 16 21:22:58.404600 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:58.404787 kernel: audit: type=1130 audit(1768598578.380:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:43340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:58.572000 audit[6000]: USER_ACCT pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.579842 sshd[6000]: Accepted publickey for core from 10.0.0.1 port 43340 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:22:58.595060 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:58.585000 audit[6000]: CRED_ACQ pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.627214 systemd-logind[1570]: New session 26 of user core. Jan 16 21:22:58.638951 kernel: audit: type=1101 audit(1768598578.572:892): pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.639041 kernel: audit: type=1103 audit(1768598578.585:893): pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.643117 kernel: audit: type=1006 audit(1768598578.585:894): pid=6000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 16 21:22:58.647912 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 16 21:22:58.676715 kernel: audit: type=1300 audit(1768598578.585:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff722505c0 a2=3 a3=0 items=0 ppid=1 pid=6000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:58.585000 audit[6000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff722505c0 a2=3 a3=0 items=0 ppid=1 pid=6000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:58.585000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:58.758529 kernel: audit: type=1327 audit(1768598578.585:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:58.662000 audit[6000]: USER_START pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.668000 audit[6004]: CRED_ACQ pid=6004 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.847865 kernel: audit: type=1105 audit(1768598578.662:895): pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:58.848001 kernel: audit: 
type=1103 audit(1768598578.668:896): pid=6004 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.087551 kubelet[2889]: E0116 21:22:59.087465 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:22:59.134807 sshd[6004]: Connection closed by 10.0.0.1 port 43340 Jan 16 21:22:59.135881 sshd-session[6000]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:59.142000 audit[6000]: USER_END pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.167820 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:43340.service: Deactivated successfully. Jan 16 21:22:59.183813 systemd[1]: session-26.scope: Deactivated successfully. Jan 16 21:22:59.190162 systemd-logind[1570]: Session 26 logged out. 
Waiting for processes to exit. Jan 16 21:22:59.195250 systemd-logind[1570]: Removed session 26. Jan 16 21:22:59.197539 kernel: audit: type=1106 audit(1768598579.142:897): pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.197716 kernel: audit: type=1104 audit(1768598579.143:898): pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.143000 audit[6000]: CRED_DISP pid=6000 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.34:22-10.0.0.1:43340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:02.121514 kubelet[2889]: E0116 21:23:02.114155 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:23:03.059094 kubelet[2889]: E0116 21:23:03.058237 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:04.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:37716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:04.172492 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:37716.service - OpenSSH per-connection server daemon (10.0.0.1:37716). Jan 16 21:23:04.186249 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:04.186500 kernel: audit: type=1130 audit(1768598584.172:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:37716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:04.433000 audit[6017]: USER_ACCT pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.444430 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:04.450523 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:04.437000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.491556 systemd-logind[1570]: New session 27 of user core. Jan 16 21:23:04.535096 kernel: audit: type=1101 audit(1768598584.433:901): pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.535231 kernel: audit: type=1103 audit(1768598584.437:902): pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.538124 kernel: audit: type=1006 audit(1768598584.437:903): pid=6017 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 16 21:23:04.437000 audit[6017]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec097a060 a2=3 a3=0 items=0 ppid=1 pid=6017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:04.629478 kernel: audit: type=1300 audit(1768598584.437:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec097a060 a2=3 a3=0 items=0 ppid=1 pid=6017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:04.629610 kernel: audit: type=1327 audit(1768598584.437:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:04.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:04.647576 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 16 21:23:04.678000 audit[6017]: USER_START pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.729092 kernel: audit: type=1105 audit(1768598584.678:904): pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.729241 kernel: audit: type=1103 audit(1768598584.685:905): pid=6023 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:04.685000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.073171 kubelet[2889]: E0116 21:23:05.070578 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:23:05.075732 kubelet[2889]: E0116 21:23:05.075098 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:23:05.090914 sshd[6023]: Connection closed by 10.0.0.1 port 37716 Jan 16 21:23:05.091066 sshd-session[6017]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:05.094000 audit[6017]: USER_END pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.102947 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:37716.service: Deactivated successfully. 
Jan 16 21:23:05.107579 systemd[1]: session-27.scope: Deactivated successfully. Jan 16 21:23:05.113494 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Jan 16 21:23:05.116906 systemd-logind[1570]: Removed session 27. Jan 16 21:23:05.094000 audit[6017]: CRED_DISP pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.152017 kernel: audit: type=1106 audit(1768598585.094:906): pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.152081 kernel: audit: type=1104 audit(1768598585.094:907): pid=6017 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.34:22-10.0.0.1:37716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:08.063170 kubelet[2889]: E0116 21:23:08.063136 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:08.073998 kubelet[2889]: E0116 21:23:08.073864 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:23:10.127595 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:37718.service - OpenSSH per-connection server daemon (10.0.0.1:37718). Jan 16 21:23:10.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.34:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:10.136431 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:10.136566 kernel: audit: type=1130 audit(1768598590.127:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.34:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:10.360924 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 37718 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:10.360000 audit[6063]: USER_ACCT pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.372239 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:10.402753 kernel: audit: type=1101 audit(1768598590.360:910): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.364000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.414520 systemd-logind[1570]: New session 28 of user core. 
Jan 16 21:23:10.441240 kernel: audit: type=1103 audit(1768598590.364:911): pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.364000 audit[6063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f9bc940 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:10.506769 kernel: audit: type=1006 audit(1768598590.364:912): pid=6063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 16 21:23:10.506909 kernel: audit: type=1300 audit(1768598590.364:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f9bc940 a2=3 a3=0 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:10.506963 kernel: audit: type=1327 audit(1768598590.364:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:10.364000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:10.506231 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 16 21:23:10.524000 audit[6063]: USER_START pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.571588 kernel: audit: type=1105 audit(1768598590.524:913): pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.530000 audit[6067]: CRED_ACQ pid=6067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.621988 kernel: audit: type=1103 audit(1768598590.530:914): pid=6067 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.896120 sshd[6067]: Connection closed by 10.0.0.1 port 37718 Jan 16 21:23:10.896721 sshd-session[6063]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:10.900000 audit[6063]: USER_END pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.913000 audit[6063]: CRED_DISP pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.968025 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:37718.service: Deactivated successfully. Jan 16 21:23:10.973746 systemd[1]: session-28.scope: Deactivated successfully. Jan 16 21:23:10.992761 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Jan 16 21:23:10.995740 kernel: audit: type=1106 audit(1768598590.900:915): pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.996005 kernel: audit: type=1104 audit(1768598590.913:916): pid=6063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:10.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.34:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:10.998114 systemd-logind[1570]: Removed session 28. 
Jan 16 21:23:11.076915 kubelet[2889]: E0116 21:23:11.076848 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:23:11.083764 kubelet[2889]: E0116 21:23:11.083602 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:23:13.074717 kubelet[2889]: E0116 21:23:13.074133 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:23:15.954511 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:15.954737 kernel: audit: type=1130 audit(1768598595.936:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.34:22-10.0.0.1:33676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.34:22-10.0.0.1:33676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:15.936856 systemd[1]: Started sshd@27-10.0.0.34:22-10.0.0.1:33676.service - OpenSSH per-connection server daemon (10.0.0.1:33676). Jan 16 21:23:16.166000 audit[6085]: USER_ACCT pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.172533 sshd[6085]: Accepted publickey for core from 10.0.0.1 port 33676 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:16.176828 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:16.196725 systemd-logind[1570]: New session 29 of user core. 
Jan 16 21:23:16.172000 audit[6085]: CRED_ACQ pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.265590 kernel: audit: type=1101 audit(1768598596.166:919): pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.265863 kernel: audit: type=1103 audit(1768598596.172:920): pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.289965 kernel: audit: type=1006 audit(1768598596.173:921): pid=6085 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 16 21:23:16.290078 kernel: audit: type=1300 audit(1768598596.173:921): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc581fc2e0 a2=3 a3=0 items=0 ppid=1 pid=6085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:16.173000 audit[6085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc581fc2e0 a2=3 a3=0 items=0 ppid=1 pid=6085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:16.291780 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 16 21:23:16.330015 kernel: audit: type=1327 audit(1768598596.173:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:16.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:16.344815 kernel: audit: type=1105 audit(1768598596.312:922): pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.312000 audit[6085]: USER_START pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.320000 audit[6089]: CRED_ACQ pid=6089 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.436976 kernel: audit: type=1103 audit(1768598596.320:923): pid=6089 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.813993 sshd[6089]: Connection closed by 10.0.0.1 port 33676 Jan 16 21:23:16.814040 sshd-session[6085]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:16.823000 audit[6085]: USER_END pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.837922 systemd[1]: sshd@27-10.0.0.34:22-10.0.0.1:33676.service: Deactivated successfully. Jan 16 21:23:16.848851 systemd[1]: session-29.scope: Deactivated successfully. Jan 16 21:23:16.861194 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. Jan 16 21:23:16.868140 systemd-logind[1570]: Removed session 29. Jan 16 21:23:16.887540 kernel: audit: type=1106 audit(1768598596.823:924): pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.823000 audit[6085]: CRED_DISP pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.939620 kernel: audit: type=1104 audit(1768598596.823:925): pid=6085 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.34:22-10.0.0.1:33676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:19.066112 kubelet[2889]: E0116 21:23:19.061951 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:23:20.092462 kubelet[2889]: E0116 21:23:20.084591 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:23:21.070241 kubelet[2889]: E0116 21:23:21.068136 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:23:21.844615 systemd[1]: Started sshd@28-10.0.0.34:22-10.0.0.1:33684.service - OpenSSH per-connection server daemon (10.0.0.1:33684). 
Jan 16 21:23:21.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.34:22-10.0.0.1:33684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:21.858102 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:21.858205 kernel: audit: type=1130 audit(1768598601.843:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.34:22-10.0.0.1:33684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:22.108000 audit[6105]: USER_ACCT pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.116438 sshd[6105]: Accepted publickey for core from 10.0.0.1 port 33684 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:22.125025 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:22.180844 kernel: audit: type=1101 audit(1768598602.108:928): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.180801 systemd-logind[1570]: New session 30 of user core. 
Jan 16 21:23:22.120000 audit[6105]: CRED_ACQ pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.219886 kernel: audit: type=1103 audit(1768598602.120:929): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.220002 kernel: audit: type=1006 audit(1768598602.120:930): pid=6105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 16 21:23:22.120000 audit[6105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7636ecf0 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:22.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:22.304010 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 16 21:23:22.318169 kernel: audit: type=1300 audit(1768598602.120:930): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7636ecf0 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:22.318235 kernel: audit: type=1327 audit(1768598602.120:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:22.323150 kernel: audit: type=1105 audit(1768598602.317:931): pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.317000 audit[6105]: USER_START pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.319000 audit[6109]: CRED_ACQ pid=6109 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.429868 kernel: audit: type=1103 audit(1768598602.319:932): pid=6109 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.718475 sshd[6109]: Connection closed by 10.0.0.1 port 33684 Jan 16 21:23:22.718843 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Jan 
16 21:23:22.727000 audit[6105]: USER_END pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.737087 systemd[1]: sshd@28-10.0.0.34:22-10.0.0.1:33684.service: Deactivated successfully. Jan 16 21:23:22.747162 systemd[1]: session-30.scope: Deactivated successfully. Jan 16 21:23:22.762759 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. Jan 16 21:23:22.768175 systemd-logind[1570]: Removed session 30. Jan 16 21:23:22.793603 kernel: audit: type=1106 audit(1768598602.727:933): pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.793854 kernel: audit: type=1104 audit(1768598602.729:934): pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.729000 audit[6105]: CRED_DISP pid=6105 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:22.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.34:22-10.0.0.1:33684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:24.106922 kubelet[2889]: E0116 21:23:24.100250 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:23:25.100211 kubelet[2889]: E0116 21:23:25.097460 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:23:25.110742 kubelet[2889]: E0116 21:23:25.108924 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:23:27.878929 systemd[1]: Started sshd@29-10.0.0.34:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328). Jan 16 21:23:27.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.34:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:27.896106 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:27.896209 kernel: audit: type=1130 audit(1768598607.877:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.34:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:28.690000 audit[6122]: USER_ACCT pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.712198 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:28.739956 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:28.726000 audit[6122]: CRED_ACQ pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.817242 kernel: audit: type=1101 audit(1768598608.690:937): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.817533 kernel: audit: type=1103 audit(1768598608.726:938): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.825951 kernel: audit: type=1006 audit(1768598608.726:939): pid=6122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 16 21:23:28.835978 systemd-logind[1570]: New session 31 of user core. 
Jan 16 21:23:28.726000 audit[6122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0b107990 a2=3 a3=0 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:28.726000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:29.014029 kernel: audit: type=1300 audit(1768598608.726:939): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0b107990 a2=3 a3=0 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:29.014125 kernel: audit: type=1327 audit(1768598608.726:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:29.022180 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 16 21:23:29.072000 audit[6122]: USER_START pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.240025 kernel: audit: type=1105 audit(1768598609.072:940): pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.099000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.365831 kernel: audit: type=1103 audit(1768598609.099:941): pid=6126 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:30.858859 sshd[6126]: Connection closed by 10.0.0.1 port 58328 Jan 16 21:23:30.863827 sshd-session[6122]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:30.896000 audit[6122]: USER_END pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:30.912956 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit. 
Jan 16 21:23:30.929220 systemd[1]: sshd@29-10.0.0.34:22-10.0.0.1:58328.service: Deactivated successfully. Jan 16 21:23:30.943886 systemd[1]: session-31.scope: Deactivated successfully. Jan 16 21:23:30.965532 systemd-logind[1570]: Removed session 31. Jan 16 21:23:31.120606 kernel: audit: type=1106 audit(1768598610.896:942): pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:31.143140 kernel: audit: type=1104 audit(1768598610.897:943): pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:30.897000 audit[6122]: CRED_DISP pid=6122 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:30.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.34:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:32.087838 kubelet[2889]: E0116 21:23:32.086029 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:23:33.098235 kubelet[2889]: E0116 21:23:33.097148 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:23:33.101874 kubelet[2889]: E0116 21:23:33.100625 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:23:36.070577 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:36.071186 kernel: audit: type=1130 audit(1768598616.005:945): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.34:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:36.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.34:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:36.002201 systemd[1]: Started sshd@30-10.0.0.34:22-10.0.0.1:34870.service - OpenSSH per-connection server daemon (10.0.0.1:34870). Jan 16 21:23:36.157552 kubelet[2889]: E0116 21:23:36.139202 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:36.210597 kubelet[2889]: E0116 21:23:36.207205 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:23:37.115234 kubelet[2889]: E0116 21:23:37.112919 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 16 21:23:37.718000 audit[6143]: USER_ACCT pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:37.889128 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:37.905948 sshd[6143]: Accepted publickey for core from 10.0.0.1 port 34870 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:37.913002 kernel: audit: type=1101 audit(1768598617.718:946): pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:37.913056 kernel: audit: type=1103 audit(1768598617.837:947): pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:37.837000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:38.072053 kernel: audit: type=1006 audit(1768598617.842:948): pid=6143 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 16 21:23:38.072781 kernel: audit: type=1300 audit(1768598617.842:948): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc56d48240 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:37.842000 audit[6143]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc56d48240 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:38.108172 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 16 21:23:38.202223 kernel: audit: type=1327 audit(1768598617.842:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:37.842000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:38.204821 systemd-logind[1570]: New session 32 of user core. Jan 16 21:23:38.260000 audit[6143]: USER_START pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:38.367234 kernel: audit: type=1105 audit(1768598618.260:949): pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:38.303000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:38.535131 kernel: audit: type=1103 audit(1768598618.303:950): pid=6165 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:39.163562 kubelet[2889]: E0116 21:23:39.163208 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:23:39.184582 kubelet[2889]: E0116 21:23:39.122630 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:23:40.199004 sshd[6165]: Connection closed by 10.0.0.1 port 34870 Jan 16 21:23:40.197907 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:40.202000 audit[6143]: USER_END pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:40.335205 systemd[1]: sshd@30-10.0.0.34:22-10.0.0.1:34870.service: Deactivated successfully. Jan 16 21:23:40.348998 kernel: audit: type=1106 audit(1768598620.202:951): pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:40.202000 audit[6143]: CRED_DISP pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:40.444036 kernel: audit: type=1104 audit(1768598620.202:952): pid=6143 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:40.419242 systemd[1]: session-32.scope: Deactivated successfully. Jan 16 21:23:40.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.34:22-10.0.0.1:34870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:40.488086 systemd-logind[1570]: Session 32 logged out. Waiting for processes to exit. Jan 16 21:23:40.502597 systemd-logind[1570]: Removed session 32. 
Jan 16 21:23:41.068898 kubelet[2889]: E0116 21:23:41.062130 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:43.600056 containerd[1592]: time="2026-01-16T21:23:43.585092183Z" level=info msg="container event discarded" container=8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024 type=CONTAINER_CREATED_EVENT Jan 16 21:23:43.632881 containerd[1592]: time="2026-01-16T21:23:43.632818325Z" level=info msg="container event discarded" container=8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024 type=CONTAINER_STARTED_EVENT Jan 16 21:23:43.633178 containerd[1592]: time="2026-01-16T21:23:43.633083810Z" level=info msg="container event discarded" container=433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab type=CONTAINER_CREATED_EVENT Jan 16 21:23:43.633178 containerd[1592]: time="2026-01-16T21:23:43.633109357Z" level=info msg="container event discarded" container=433f28f69dda3008a3e6d7c1187fab536fd6264fe34d4bfce71ab515d464cbab type=CONTAINER_STARTED_EVENT Jan 16 21:23:43.633178 containerd[1592]: time="2026-01-16T21:23:43.633121449Z" level=info msg="container event discarded" container=2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8 type=CONTAINER_CREATED_EVENT Jan 16 21:23:43.633178 containerd[1592]: time="2026-01-16T21:23:43.633133622Z" level=info msg="container event discarded" container=2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8 type=CONTAINER_STARTED_EVENT Jan 16 21:23:43.725205 containerd[1592]: time="2026-01-16T21:23:43.722191304Z" level=info msg="container event discarded" container=46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0 type=CONTAINER_CREATED_EVENT Jan 16 21:23:43.746165 containerd[1592]: time="2026-01-16T21:23:43.746106487Z" level=info msg="container event discarded" 
container=b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2 type=CONTAINER_CREATED_EVENT Jan 16 21:23:43.786864 containerd[1592]: time="2026-01-16T21:23:43.786586263Z" level=info msg="container event discarded" container=734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81 type=CONTAINER_CREATED_EVENT Jan 16 21:23:44.088045 kubelet[2889]: E0116 21:23:44.087837 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:23:44.113017 containerd[1592]: time="2026-01-16T21:23:44.103086182Z" level=info msg="container event discarded" container=46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0 type=CONTAINER_STARTED_EVENT Jan 16 21:23:44.140246 containerd[1592]: time="2026-01-16T21:23:44.140177948Z" level=info msg="container event discarded" container=734a6dbe22161be945e5fb61f5c369e623535b817de15fa0572ddb43c292ff81 type=CONTAINER_STARTED_EVENT Jan 16 21:23:44.178949 containerd[1592]: time="2026-01-16T21:23:44.178879673Z" level=info msg="container event discarded" container=b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2 type=CONTAINER_STARTED_EVENT Jan 16 21:23:45.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.34:22-10.0.0.1:52592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:45.363888 systemd[1]: Started sshd@31-10.0.0.34:22-10.0.0.1:52592.service - OpenSSH per-connection server daemon (10.0.0.1:52592). 
Jan 16 21:23:45.745214 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:45.745873 kernel: audit: type=1130 audit(1768598625.362:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.34:22-10.0.0.1:52592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:46.166022 kubelet[2889]: E0116 21:23:46.165971 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:23:47.577000 audit[6186]: USER_ACCT pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:47.658996 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 52592 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:47.884822 kernel: audit: type=1101 audit(1768598627.577:955): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:47.882000 audit[6186]: CRED_ACQ pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:47.920484 sshd-session[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:48.032035 systemd-logind[1570]: New session 33 of user core. Jan 16 21:23:48.040002 kernel: audit: type=1103 audit(1768598627.882:956): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:48.138180 kernel: audit: type=1006 audit(1768598627.891:957): pid=6186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 16 21:23:48.142184 kubelet[2889]: E0116 21:23:48.131919 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:23:48.302001 kernel: audit: type=1300 audit(1768598627.891:957): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd59a3ab0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:47.891000 audit[6186]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd59a3ab0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:48.152045 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 16 21:23:47.891000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:48.408899 kernel: audit: type=1327 audit(1768598627.891:957): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:48.317000 audit[6186]: USER_START pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:48.675621 kernel: audit: type=1105 audit(1768598628.317:958): pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:48.351000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:48.879115 kernel: audit: type=1103 audit(1768598628.351:959): pid=6191 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.154199 kubelet[2889]: E0116 21:23:50.154152 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:50.202597 kubelet[2889]: E0116 21:23:50.192563 
2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:23:51.088232 containerd[1592]: time="2026-01-16T21:23:51.082616875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:23:51.102636 kubelet[2889]: E0116 21:23:51.096644 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:23:51.459952 containerd[1592]: time="2026-01-16T21:23:51.441614883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:51.525876 containerd[1592]: time="2026-01-16T21:23:51.525612547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:23:51.530577 containerd[1592]: time="2026-01-16T21:23:51.530160469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:51.537089 kubelet[2889]: E0116 21:23:51.531178 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:23:51.537089 kubelet[2889]: E0116 21:23:51.532945 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:23:51.545599 kubelet[2889]: E0116 21:23:51.545234 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:51.702048 containerd[1592]: time="2026-01-16T21:23:51.669627914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:23:52.129601 sshd[6191]: Connection closed by 10.0.0.1 port 52592 Jan 16 21:23:52.190000 audit[6186]: USER_END pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:52.120055 sshd-session[6186]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:52.224023 containerd[1592]: time="2026-01-16T21:23:52.184513172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:52.243152 systemd[1]: sshd@31-10.0.0.34:22-10.0.0.1:52592.service: Deactivated successfully. Jan 16 21:23:52.257060 containerd[1592]: time="2026-01-16T21:23:52.247613207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:23:52.257060 containerd[1592]: time="2026-01-16T21:23:52.247915160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:52.318234 kubelet[2889]: E0116 21:23:52.295209 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:23:52.318234 kubelet[2889]: E0116 21:23:52.296028 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:23:52.318234 kubelet[2889]: E0116 21:23:52.296121 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:52.318234 kubelet[2889]: E0116 21:23:52.296183 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:23:52.456966 systemd[1]: session-33.scope: Deactivated successfully. Jan 16 21:23:52.591247 kernel: audit: type=1106 audit(1768598632.190:960): pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:52.528908 systemd-logind[1570]: Session 33 logged out. Waiting for processes to exit. Jan 16 21:23:52.533207 systemd-logind[1570]: Removed session 33. 
Jan 16 21:23:52.193000 audit[6186]: CRED_DISP pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:52.820856 kernel: audit: type=1104 audit(1768598632.193:961): pid=6186 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:52.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.34:22-10.0.0.1:52592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:53.078444 kernel: audit: type=1131 audit(1768598632.268:962): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.34:22-10.0.0.1:52592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:56.919848 kubelet[2889]: E0116 21:23:56.910884 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:23:57.066932 kubelet[2889]: E0116 21:23:57.066895 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:57.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.34:22-10.0.0.1:50782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:57.429558 systemd[1]: Started sshd@32-10.0.0.34:22-10.0.0.1:50782.service - OpenSSH per-connection server daemon (10.0.0.1:50782). Jan 16 21:23:57.738045 kernel: audit: type=1130 audit(1768598637.433:963): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.34:22-10.0.0.1:50782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:59.195178 kubelet[2889]: E0116 21:23:59.194144 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:23:59.282000 audit[6213]: USER_ACCT pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.324085 sshd[6213]: Accepted publickey for core from 10.0.0.1 port 50782 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:23:59.346085 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:59.503231 kernel: audit: type=1101 audit(1768598639.282:964): pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.321000 audit[6213]: CRED_ACQ pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.606086 systemd-logind[1570]: New session 34 of user core. 
Jan 16 21:23:59.754863 kernel: audit: type=1103 audit(1768598639.321:965): pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.936105 kernel: audit: type=1006 audit(1768598639.321:966): pid=6213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 16 21:23:59.946083 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 16 21:23:59.321000 audit[6213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1ad5eb80 a2=3 a3=0 items=0 ppid=1 pid=6213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:00.210645 kernel: audit: type=1300 audit(1768598639.321:966): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1ad5eb80 a2=3 a3=0 items=0 ppid=1 pid=6213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:59.321000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:00.068000 audit[6213]: USER_START pid=6213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.551248 kernel: audit: type=1327 audit(1768598639.321:966): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:00.567982 kernel: audit: type=1105 audit(1768598640.068:967): pid=6213 uid=0 auid=500 ses=34 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.105000 audit[6217]: CRED_ACQ pid=6217 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.703945 kernel: audit: type=1103 audit(1768598640.105:968): pid=6217 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:01.125637 kubelet[2889]: E0116 21:24:01.108047 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:24:02.503239 containerd[1592]: time="2026-01-16T21:24:02.488914536Z" level=info msg="container event discarded" container=00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31 type=CONTAINER_CREATED_EVENT Jan 16 21:24:02.503239 containerd[1592]: time="2026-01-16T21:24:02.488976392Z" level=info msg="container event discarded" container=00e37dcca0de6c16db377ca32b1c67f287fbdd2e3392bfff36a1952720aabc31 type=CONTAINER_STARTED_EVENT Jan 16 21:24:02.753942 containerd[1592]: time="2026-01-16T21:24:02.748229095Z" level=info msg="container 
event discarded" container=adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee type=CONTAINER_CREATED_EVENT Jan 16 21:24:02.753942 containerd[1592]: time="2026-01-16T21:24:02.753619226Z" level=info msg="container event discarded" container=af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d type=CONTAINER_CREATED_EVENT Jan 16 21:24:02.753942 containerd[1592]: time="2026-01-16T21:24:02.753639935Z" level=info msg="container event discarded" container=af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d type=CONTAINER_STARTED_EVENT Jan 16 21:24:03.144930 kubelet[2889]: E0116 21:24:03.141878 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:24:03.341177 containerd[1592]: time="2026-01-16T21:24:03.333989350Z" level=info msg="container event discarded" container=adfc1a82e365df08cc65dcd28c3732f8f3e39c8ca8c6a526b9e0424975acb9ee type=CONTAINER_STARTED_EVENT Jan 16 21:24:03.516110 sshd[6217]: Connection closed by 10.0.0.1 port 50782 Jan 16 21:24:03.501651 sshd-session[6213]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:03.628000 audit[6213]: USER_END pid=6213 uid=0 auid=500 
ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:03.670081 systemd[1]: sshd@32-10.0.0.34:22-10.0.0.1:50782.service: Deactivated successfully. Jan 16 21:24:03.815133 systemd[1]: session-34.scope: Deactivated successfully. Jan 16 21:24:03.824168 systemd-logind[1570]: Session 34 logged out. Waiting for processes to exit. Jan 16 21:24:03.831959 systemd-logind[1570]: Removed session 34. Jan 16 21:24:03.897884 kernel: audit: type=1106 audit(1768598643.628:969): pid=6213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:03.629000 audit[6213]: CRED_DISP pid=6213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.032221 kernel: audit: type=1104 audit(1768598643.629:970): pid=6213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:03.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.34:22-10.0.0.1:50782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:04.182208 kernel: audit: type=1131 audit(1768598643.670:971): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.34:22-10.0.0.1:50782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:06.129130 kubelet[2889]: E0116 21:24:06.125233 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:24:07.350505 kubelet[2889]: E0116 21:24:07.345099 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:24:08.518614 kubelet[2889]: E0116 21:24:08.517224 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:08.518614 kubelet[2889]: E0116 21:24:08.517920 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:24:08.695629 kernel: audit: type=1130 audit(1768598648.590:972): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.34:22-10.0.0.1:37284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:08.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.34:22-10.0.0.1:37284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:08.592098 systemd[1]: Started sshd@33-10.0.0.34:22-10.0.0.1:37284.service - OpenSSH per-connection server daemon (10.0.0.1:37284). 
Jan 16 21:24:10.242211 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 37284 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:10.240000 audit[6254]: USER_ACCT pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:10.349142 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:10.379930 kernel: audit: type=1101 audit(1768598650.240:973): pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:10.307000 audit[6254]: CRED_ACQ pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:10.549873 kernel: audit: type=1103 audit(1768598650.307:974): pid=6254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:10.498014 systemd-logind[1570]: New session 35 of user core. Jan 16 21:24:10.620166 kernel: audit: type=1006 audit(1768598650.307:975): pid=6254 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 16 21:24:10.593119 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 16 21:24:10.307000 audit[6254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceab1acc0 a2=3 a3=0 items=0 ppid=1 pid=6254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:24:10.876053 kernel: audit: type=1300 audit(1768598650.307:975): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceab1acc0 a2=3 a3=0 items=0 ppid=1 pid=6254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:24:10.928596 kernel: audit: type=1327 audit(1768598650.307:975): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:24:10.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:24:10.683000 audit[6254]: USER_START pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:11.046014 kernel: audit: type=1105 audit(1768598650.683:976): pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:11.348030 kernel: audit: type=1103 audit(1768598650.692:977): pid=6263 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:10.692000 audit[6263]: CRED_ACQ pid=6263 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:12.765895 containerd[1592]: time="2026-01-16T21:24:12.731214102Z" level=info msg="container event discarded" container=8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f type=CONTAINER_CREATED_EVENT
Jan 16 21:24:13.342000 audit[6254]: USER_END pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:13.349020 sshd[6263]: Connection closed by 10.0.0.1 port 37284
Jan 16 21:24:13.333237 sshd-session[6254]: pam_unix(sshd:session): session closed for user core
Jan 16 21:24:13.511987 kernel: audit: type=1106 audit(1768598653.342:978): pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:13.378000 audit[6254]: CRED_DISP pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:13.514919 systemd[1]: sshd@33-10.0.0.34:22-10.0.0.1:37284.service: Deactivated successfully.
Jan 16 21:24:13.525940 systemd[1]: session-35.scope: Deactivated successfully.
Jan 16 21:24:13.557225 systemd-logind[1570]: Session 35 logged out. Waiting for processes to exit.
Jan 16 21:24:13.653233 kernel: audit: type=1104 audit(1768598653.378:979): pid=6254 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:13.653977 kernel: audit: type=1131 audit(1768598653.514:980): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.34:22-10.0.0.1:37284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:13.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.34:22-10.0.0.1:37284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:13.699887 containerd[1592]: time="2026-01-16T21:24:13.635572806Z" level=info msg="container event discarded" container=8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f type=CONTAINER_STARTED_EVENT
Jan 16 21:24:13.631596 systemd-logind[1570]: Removed session 35.
Jan 16 21:24:14.322630 containerd[1592]: time="2026-01-16T21:24:14.321225904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 16 21:24:15.265170 containerd[1592]: time="2026-01-16T21:24:15.264933890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:15.391944 containerd[1592]: time="2026-01-16T21:24:15.390072077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 16 21:24:15.391944 containerd[1592]: time="2026-01-16T21:24:15.390188214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:15.410901 kubelet[2889]: E0116 21:24:15.404155 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 21:24:15.410901 kubelet[2889]: E0116 21:24:15.404228 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 21:24:15.410901 kubelet[2889]: E0116 21:24:15.405043 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c7c579b5f-dsr85_calico-system(f80ed623-af1a-45e7-a125-0c7c2229f592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:15.410901 kubelet[2889]: E0116 21:24:15.405091 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:24:15.449852 containerd[1592]: time="2026-01-16T21:24:15.445649177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 16 21:24:15.667595 containerd[1592]: time="2026-01-16T21:24:15.655050662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:15.700860 containerd[1592]: time="2026-01-16T21:24:15.700638879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:15.705644 containerd[1592]: time="2026-01-16T21:24:15.704882091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 16 21:24:15.711051 kubelet[2889]: E0116 21:24:15.709096 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 21:24:15.716158 kubelet[2889]: E0116 21:24:15.710242 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 21:24:15.722046 kubelet[2889]: E0116 21:24:15.716247 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:15.773248 containerd[1592]: time="2026-01-16T21:24:15.773073931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 16 21:24:16.096990 containerd[1592]: time="2026-01-16T21:24:16.096029619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:16.180221 containerd[1592]: time="2026-01-16T21:24:16.176251270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 16 21:24:16.180221 containerd[1592]: time="2026-01-16T21:24:16.178013994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:16.239045 kubelet[2889]: E0116 21:24:16.204173 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 21:24:16.239045 kubelet[2889]: E0116 21:24:16.204241 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 21:24:16.239045 kubelet[2889]: E0116 21:24:16.206917 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6hngd_calico-system(b99000d7-a136-4299-82d0-76fa7e3c28f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:16.239045 kubelet[2889]: E0116 21:24:16.207192 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:24:16.248244 containerd[1592]: time="2026-01-16T21:24:16.225162794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 21:24:16.723048 containerd[1592]: time="2026-01-16T21:24:16.718648577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:16.971628 containerd[1592]: time="2026-01-16T21:24:16.966248616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 21:24:16.971628 containerd[1592]: time="2026-01-16T21:24:16.967166346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:16.972054 kubelet[2889]: E0116 21:24:16.967939 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:24:16.972054 kubelet[2889]: E0116 21:24:16.967997 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:24:16.972054 kubelet[2889]: E0116 21:24:16.968082 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-cb6w2_calico-apiserver(7e12a227-9190-436c-a55f-74274779eb32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:16.972054 kubelet[2889]: E0116 21:24:16.968126 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:24:18.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.34:22-10.0.0.1:48688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:18.466226 systemd[1]: Started sshd@34-10.0.0.34:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688).
Jan 16 21:24:18.616217 kernel: audit: type=1130 audit(1768598658.467:981): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.34:22-10.0.0.1:48688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:19.850084 sshd[6299]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A
Jan 16 21:24:19.839231 sshd-session[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:24:19.824000 audit[6299]: USER_ACCT pid=6299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:20.029242 kernel: audit: type=1101 audit(1768598659.824:982): pid=6299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:19.834000 audit[6299]: CRED_ACQ pid=6299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:20.139950 systemd-logind[1570]: New session 36 of user core.
Jan 16 21:24:20.298642 containerd[1592]: time="2026-01-16T21:24:20.290218300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 21:24:20.358524 kernel: audit: type=1103 audit(1768598659.834:983): pid=6299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:20.358663 kernel: audit: type=1006 audit(1768598659.834:984): pid=6299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1
Jan 16 21:24:19.834000 audit[6299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc47d84940 a2=3 a3=0 items=0 ppid=1 pid=6299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:24:20.425646 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 16 21:24:20.688002 kubelet[2889]: E0116 21:24:20.394161 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:24:20.740238 kernel: audit: type=1300 audit(1768598659.834:984): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc47d84940 a2=3 a3=0 items=0 ppid=1 pid=6299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:24:20.740919 kernel: audit: type=1327 audit(1768598659.834:984): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:24:19.834000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:24:20.813943 containerd[1592]: time="2026-01-16T21:24:20.766659743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:20.971230 kernel: audit: type=1105 audit(1768598660.814:985): pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:20.814000 audit[6299]: USER_START pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:20.998637 containerd[1592]: time="2026-01-16T21:24:20.997161250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 21:24:21.001081 containerd[1592]: time="2026-01-16T21:24:21.001048492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:21.003073 kubelet[2889]: E0116 21:24:21.003031 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:24:21.003248 kubelet[2889]: E0116 21:24:21.003224 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 21:24:21.004986 kubelet[2889]: E0116 21:24:21.003944 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5984987-nz6bj_calico-apiserver(ac06912f-e290-4031-a848-1392298fa9de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:21.004986 kubelet[2889]: E0116 21:24:21.003992 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:24:21.054000 audit[6303]: CRED_ACQ pid=6303 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:21.148653 kernel: audit: type=1103 audit(1768598661.054:986): pid=6303 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:21.161048 containerd[1592]: time="2026-01-16T21:24:21.161009270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 16 21:24:21.465851 containerd[1592]: time="2026-01-16T21:24:21.457178027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 21:24:21.557129 containerd[1592]: time="2026-01-16T21:24:21.556956014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 16 21:24:21.557129 containerd[1592]: time="2026-01-16T21:24:21.557078061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 16 21:24:21.567634 kubelet[2889]: E0116 21:24:21.559081 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 21:24:21.575250 kubelet[2889]: E0116 21:24:21.573618 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 21:24:21.575250 kubelet[2889]: E0116 21:24:21.573943 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgkw9_calico-system(91f2e5a0-0976-4b7a-ac63-530715dff408): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 16 21:24:21.575250 kubelet[2889]: E0116 21:24:21.573986 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:24:23.529635 sshd[6303]: Connection closed by 10.0.0.1 port 48688
Jan 16 21:24:23.507180 sshd-session[6299]: pam_unix(sshd:session): session closed for user core
Jan 16 21:24:23.599000 audit[6299]: USER_END pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:23.640982 systemd[1]: sshd@34-10.0.0.34:22-10.0.0.1:48688.service: Deactivated successfully.
Jan 16 21:24:23.652906 systemd[1]: session-36.scope: Deactivated successfully.
Jan 16 21:24:23.701629 systemd-logind[1570]: Session 36 logged out. Waiting for processes to exit.
Jan 16 21:24:23.599000 audit[6299]: CRED_DISP pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:23.773159 systemd-logind[1570]: Removed session 36.
Jan 16 21:24:23.970947 kernel: audit: type=1106 audit(1768598663.599:987): pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:23.971093 kernel: audit: type=1104 audit(1768598663.599:988): pid=6299 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:23.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.34:22-10.0.0.1:48688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:24.186610 kernel: audit: type=1131 audit(1768598663.634:989): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.34:22-10.0.0.1:48688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:26.238159 kubelet[2889]: E0116 21:24:26.237242 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:24:26.346193 kubelet[2889]: E0116 21:24:26.337022 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:24:28.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.34:22-10.0.0.1:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:28.492594 systemd[1]: Started sshd@35-10.0.0.34:22-10.0.0.1:52720.service - OpenSSH per-connection server daemon (10.0.0.1:52720).
Jan 16 21:24:28.596937 kernel: audit: type=1130 audit(1768598668.492:990): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.34:22-10.0.0.1:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:24:29.105934 kubelet[2889]: E0116 21:24:29.084834 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:24:29.493000 audit[6317]: USER_ACCT pid=6317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:29.530589 systemd-logind[1570]: New session 37 of user core.
Jan 16 21:24:29.508625 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:24:29.559812 sshd[6317]: Accepted publickey for core from 10.0.0.1 port 52720 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A
Jan 16 21:24:29.719842 kernel: audit: type=1101 audit(1768598669.493:991): pid=6317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:29.720502 kernel: audit: type=1103 audit(1768598669.500:992): pid=6317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:29.500000 audit[6317]: CRED_ACQ pid=6317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:24:29.725114 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 16 21:24:30.043147 kernel: audit: type=1006 audit(1768598669.501:993): pid=6317 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 16 21:24:30.043632 kernel: audit: type=1300 audit(1768598669.501:993): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09f1b910 a2=3 a3=0 items=0 ppid=1 pid=6317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:29.501000 audit[6317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09f1b910 a2=3 a3=0 items=0 ppid=1 pid=6317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:29.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:30.376884 kernel: audit: type=1327 audit(1768598669.501:993): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:30.421139 kernel: audit: type=1105 audit(1768598669.749:994): pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:29.749000 audit[6317]: USER_START pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:29.761000 audit[6321]: CRED_ACQ pid=6321 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:30.566995 kernel: audit: type=1103 audit(1768598669.761:995): pid=6321 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:31.106146 kubelet[2889]: E0116 21:24:31.098634 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:24:31.320814 sshd[6321]: Connection closed by 10.0.0.1 port 52720 Jan 16 21:24:31.345102 sshd-session[6317]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:31.416000 audit[6317]: USER_END pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:31.538148 kernel: audit: type=1106 audit(1768598671.416:996): pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:31.527659 
systemd[1]: sshd@35-10.0.0.34:22-10.0.0.1:52720.service: Deactivated successfully. Jan 16 21:24:31.498000 audit[6317]: CRED_DISP pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:31.547670 systemd[1]: session-37.scope: Deactivated successfully. Jan 16 21:24:31.556875 systemd-logind[1570]: Session 37 logged out. Waiting for processes to exit. Jan 16 21:24:31.704947 kernel: audit: type=1104 audit(1768598671.498:997): pid=6317 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:31.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.34:22-10.0.0.1:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:31.739881 systemd-logind[1570]: Removed session 37. Jan 16 21:24:31.888150 kernel: audit: type=1131 audit(1768598671.532:998): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.34:22-10.0.0.1:52720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:32.111226 kubelet[2889]: E0116 21:24:32.110141 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:24:32.116929 kubelet[2889]: E0116 21:24:32.116884 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:24:33.059868 kubelet[2889]: E0116 21:24:33.058616 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:33.073214 kubelet[2889]: E0116 21:24:33.073020 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:24:36.369852 systemd[1]: Started sshd@36-10.0.0.34:22-10.0.0.1:51560.service - OpenSSH per-connection server daemon (10.0.0.1:51560). Jan 16 21:24:36.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.34:22-10.0.0.1:51560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:36.454920 kernel: audit: type=1130 audit(1768598676.368:999): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.34:22-10.0.0.1:51560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:37.064000 audit[6340]: USER_ACCT pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.104064 sshd[6340]: Accepted publickey for core from 10.0.0.1 port 51560 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:37.130520 kernel: audit: type=1101 audit(1768598677.064:1000): pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.166000 audit[6340]: CRED_ACQ pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.201619 systemd-logind[1570]: New session 38 of user core. 
Jan 16 21:24:37.172165 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:37.258943 kernel: audit: type=1103 audit(1768598677.166:1001): pid=6340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.329871 kernel: audit: type=1006 audit(1768598677.167:1002): pid=6340 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 16 21:24:37.167000 audit[6340]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff102655c0 a2=3 a3=0 items=0 ppid=1 pid=6340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:37.336842 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 16 21:24:37.418529 kernel: audit: type=1300 audit(1768598677.167:1002): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff102655c0 a2=3 a3=0 items=0 ppid=1 pid=6340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:37.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:37.466555 kernel: audit: type=1327 audit(1768598677.167:1002): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:37.570683 kernel: audit: type=1105 audit(1768598677.383:1003): pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.383000 audit[6340]: USER_START pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.396000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:37.665675 kernel: audit: type=1103 audit(1768598677.396:1004): pid=6369 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.349530 sshd[6369]: Connection closed by 10.0.0.1 port 
51560 Jan 16 21:24:38.355539 sshd-session[6340]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:38.370000 audit[6340]: USER_END pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.395156 systemd[1]: sshd@36-10.0.0.34:22-10.0.0.1:51560.service: Deactivated successfully. Jan 16 21:24:38.408020 systemd[1]: session-38.scope: Deactivated successfully. Jan 16 21:24:38.415898 systemd-logind[1570]: Session 38 logged out. Waiting for processes to exit. Jan 16 21:24:38.454144 kernel: audit: type=1106 audit(1768598678.370:1005): pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.429119 systemd[1]: Started sshd@37-10.0.0.34:22-10.0.0.1:51574.service - OpenSSH per-connection server daemon (10.0.0.1:51574). Jan 16 21:24:38.437982 systemd-logind[1570]: Removed session 38. 
Jan 16 21:24:38.370000 audit[6340]: CRED_DISP pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.512840 kernel: audit: type=1104 audit(1768598678.370:1006): pid=6340 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.34:22-10.0.0.1:51560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:38.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.34:22-10.0.0.1:51574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:39.036000 audit[6384]: USER_ACCT pid=6384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:39.039512 sshd[6384]: Accepted publickey for core from 10.0.0.1 port 51574 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:39.039000 audit[6384]: CRED_ACQ pid=6384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:39.040000 audit[6384]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf759a860 a2=3 a3=0 items=0 ppid=1 pid=6384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:39.040000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:39.051549 sshd-session[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:39.110245 systemd-logind[1570]: New session 39 of user core. Jan 16 21:24:39.152047 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 16 21:24:39.195000 audit[6384]: USER_START pid=6384 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:39.209000 audit[6388]: CRED_ACQ pid=6388 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:40.096037 kubelet[2889]: E0116 21:24:40.090684 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:24:41.099162 kubelet[2889]: E0116 21:24:41.095909 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:24:42.147139 sshd[6388]: Connection closed by 10.0.0.1 port 51574 Jan 16 21:24:42.149211 sshd-session[6384]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:42.150000 audit[6384]: USER_END pid=6384 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:42.171556 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 16 21:24:42.171633 kernel: audit: type=1106 audit(1768598682.150:1014): pid=6384 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:42.171000 audit[6384]: CRED_DISP pid=6384 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:42.405211 kernel: audit: type=1104 audit(1768598682.171:1015): pid=6384 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:42.333101 systemd[1]: sshd@37-10.0.0.34:22-10.0.0.1:51574.service: Deactivated successfully. 
Jan 16 21:24:42.339246 systemd[1]: session-39.scope: Deactivated successfully. Jan 16 21:24:42.351887 systemd-logind[1570]: Session 39 logged out. Waiting for processes to exit. Jan 16 21:24:42.407632 systemd[1]: Started sshd@38-10.0.0.34:22-10.0.0.1:51584.service - OpenSSH per-connection server daemon (10.0.0.1:51584). Jan 16 21:24:42.415084 systemd-logind[1570]: Removed session 39. Jan 16 21:24:42.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.34:22-10.0.0.1:51574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:42.473096 kernel: audit: type=1131 audit(1768598682.332:1016): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.34:22-10.0.0.1:51574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:42.473203 kernel: audit: type=1130 audit(1768598682.407:1017): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.34:22-10.0.0.1:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:42.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.34:22-10.0.0.1:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:43.121494 kubelet[2889]: E0116 21:24:43.118050 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:24:43.162000 audit[6400]: USER_ACCT pid=6400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.183886 sshd-session[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:43.185187 sshd[6400]: Accepted publickey for core from 10.0.0.1 port 51584 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:43.204680 systemd-logind[1570]: New session 40 of user core. 
Jan 16 21:24:43.261065 kernel: audit: type=1101 audit(1768598683.162:1018): pid=6400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.261215 kernel: audit: type=1103 audit(1768598683.179:1019): pid=6400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.179000 audit[6400]: CRED_ACQ pid=6400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.359577 kernel: audit: type=1006 audit(1768598683.179:1020): pid=6400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 16 21:24:43.319553 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 16 21:24:43.433979 kernel: audit: type=1300 audit(1768598683.179:1020): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe19ead4e0 a2=3 a3=0 items=0 ppid=1 pid=6400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:43.179000 audit[6400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe19ead4e0 a2=3 a3=0 items=0 ppid=1 pid=6400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:43.179000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:43.469554 kernel: audit: type=1327 audit(1768598683.179:1020): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:43.379000 audit[6400]: USER_START pid=6400 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.550848 kernel: audit: type=1105 audit(1768598683.379:1021): pid=6400 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:43.398000 audit[6405]: CRED_ACQ pid=6405 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:44.075852 kubelet[2889]: 
E0116 21:24:44.073603 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:24:46.076091 kubelet[2889]: E0116 21:24:46.074956 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:46.099117 kubelet[2889]: E0116 21:24:46.098684 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:24:46.971000 audit[6420]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:46.971000 audit[6420]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffef5a82b60 a2=0 
a3=7ffef5a82b4c items=0 ppid=3054 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:46.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:46.999517 sshd[6405]: Connection closed by 10.0.0.1 port 51584 Jan 16 21:24:46.999690 sshd-session[6400]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:47.007000 audit[6420]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:47.007000 audit[6420]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef5a82b60 a2=0 a3=0 items=0 ppid=3054 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:47.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:47.051000 audit[6400]: USER_END pid=6400 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.059000 audit[6400]: CRED_DISP pid=6400 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@39-10.0.0.34:22-10.0.0.1:41178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:47.081590 systemd[1]: Started sshd@39-10.0.0.34:22-10.0.0.1:41178.service - OpenSSH per-connection server daemon (10.0.0.1:41178). Jan 16 21:24:47.137188 systemd[1]: sshd@38-10.0.0.34:22-10.0.0.1:51584.service: Deactivated successfully. Jan 16 21:24:47.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.34:22-10.0.0.1:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:47.145142 systemd[1]: session-40.scope: Deactivated successfully. Jan 16 21:24:47.151518 systemd[1]: session-40.scope: Consumed 1.490s CPU time, 48.2M memory peak. Jan 16 21:24:47.158640 systemd-logind[1570]: Session 40 logged out. Waiting for processes to exit. Jan 16 21:24:47.172664 systemd-logind[1570]: Removed session 40. Jan 16 21:24:47.615667 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 16 21:24:47.615954 kernel: audit: type=1101 audit(1768598687.570:1029): pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.570000 audit[6422]: USER_ACCT pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.578148 sshd-session[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:47.617047 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 41178 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:47.574000 
audit[6422]: CRED_ACQ pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.679030 systemd-logind[1570]: New session 41 of user core. Jan 16 21:24:47.756000 kernel: audit: type=1103 audit(1768598687.574:1030): pid=6422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.756134 kernel: audit: type=1006 audit(1768598687.574:1031): pid=6422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 16 21:24:47.574000 audit[6422]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61ec7220 a2=3 a3=0 items=0 ppid=1 pid=6422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:47.818577 kernel: audit: type=1300 audit(1768598687.574:1031): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61ec7220 a2=3 a3=0 items=0 ppid=1 pid=6422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:47.574000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:47.820994 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 16 21:24:47.846997 kernel: audit: type=1327 audit(1768598687.574:1031): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:47.838000 audit[6422]: USER_START pid=6422 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.918102 kernel: audit: type=1105 audit(1768598687.838:1032): pid=6422 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.918219 kernel: audit: type=1103 audit(1768598687.909:1033): pid=6429 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:47.909000 audit[6429]: CRED_ACQ pid=6429 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:48.102629 kubelet[2889]: E0116 21:24:48.100171 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:24:48.399000 audit[6437]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=6437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:48.448059 kernel: audit: type=1325 audit(1768598688.399:1034): table=filter:144 family=2 entries=38 op=nft_register_rule pid=6437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:48.399000 audit[6437]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcb6db14b0 a2=0 a3=7ffcb6db149c items=0 ppid=3054 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:48.536990 kernel: audit: type=1300 audit(1768598688.399:1034): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcb6db14b0 a2=0 a3=7ffcb6db149c items=0 ppid=3054 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:48.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:48.539000 audit[6437]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=6437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:48.539000 audit[6437]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb6db14b0 a2=0 a3=0 items=0 ppid=3054 pid=6437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:48.584617 kernel: audit: type=1327 audit(1768598688.399:1034): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:48.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:49.317866 sshd[6429]: Connection closed by 10.0.0.1 port 41178 Jan 16 21:24:49.317851 sshd-session[6422]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:49.326000 audit[6422]: USER_END pid=6422 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:49.327000 audit[6422]: CRED_DISP pid=6422 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:49.351137 systemd[1]: sshd@39-10.0.0.34:22-10.0.0.1:41178.service: Deactivated successfully. Jan 16 21:24:49.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.34:22-10.0.0.1:41178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:49.362213 systemd[1]: session-41.scope: Deactivated successfully. Jan 16 21:24:49.367691 systemd-logind[1570]: Session 41 logged out. Waiting for processes to exit. Jan 16 21:24:49.377049 systemd-logind[1570]: Removed session 41. Jan 16 21:24:49.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.34:22-10.0.0.1:41188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:49.381013 systemd[1]: Started sshd@40-10.0.0.34:22-10.0.0.1:41188.service - OpenSSH per-connection server daemon (10.0.0.1:41188). Jan 16 21:24:49.693000 audit[6442]: USER_ACCT pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:49.699067 sshd[6442]: Accepted publickey for core from 10.0.0.1 port 41188 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:49.701000 audit[6442]: CRED_ACQ pid=6442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:49.702000 audit[6442]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5cb67970 a2=3 a3=0 items=0 ppid=1 pid=6442 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:49.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:49.712246 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:49.748248 systemd-logind[1570]: New session 42 of user core. Jan 16 21:24:49.771246 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 16 21:24:49.786000 audit[6442]: USER_START pid=6442 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:49.805000 audit[6446]: CRED_ACQ pid=6446 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:50.060611 kubelet[2889]: E0116 21:24:50.060186 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:50.385634 sshd[6446]: Connection closed by 10.0.0.1 port 41188 Jan 16 21:24:50.385202 sshd-session[6442]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:50.394000 audit[6442]: USER_END pid=6442 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:50.394000 audit[6442]: CRED_DISP pid=6442 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:50.405980 systemd[1]: sshd@40-10.0.0.34:22-10.0.0.1:41188.service: Deactivated successfully. 
Jan 16 21:24:50.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.34:22-10.0.0.1:41188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:50.419196 systemd[1]: session-42.scope: Deactivated successfully. Jan 16 21:24:50.429069 systemd-logind[1570]: Session 42 logged out. Waiting for processes to exit. Jan 16 21:24:50.432659 systemd-logind[1570]: Removed session 42. Jan 16 21:24:53.068143 kubelet[2889]: E0116 21:24:53.062136 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:54.065248 kubelet[2889]: E0116 21:24:54.064049 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:24:54.412982 containerd[1592]: time="2026-01-16T21:24:54.410921326Z" level=info msg="container event discarded" container=d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901 type=CONTAINER_CREATED_EVENT Jan 16 21:24:54.412982 containerd[1592]: time="2026-01-16T21:24:54.411135085Z" level=info msg="container event discarded" container=d8ddae9646ee33d3b3f2304300d16d1832ddd2cdb92ca5f5895458d97319b901 type=CONTAINER_STARTED_EVENT Jan 16 21:24:54.834027 containerd[1592]: time="2026-01-16T21:24:54.832889549Z" level=info msg="container event discarded" 
container=93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092 type=CONTAINER_CREATED_EVENT Jan 16 21:24:54.834027 containerd[1592]: time="2026-01-16T21:24:54.833075656Z" level=info msg="container event discarded" container=93dfabfdf599f3d361287516af3964d525dc81d71338d6dfc4d20254c4f10092 type=CONTAINER_STARTED_EVENT Jan 16 21:24:55.453698 systemd[1]: Started sshd@41-10.0.0.34:22-10.0.0.1:51814.service - OpenSSH per-connection server daemon (10.0.0.1:51814). Jan 16 21:24:55.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.34:22-10.0.0.1:51814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:55.468087 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 16 21:24:55.468147 kernel: audit: type=1130 audit(1768598695.452:1048): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.34:22-10.0.0.1:51814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:55.866877 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 51814 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:24:55.864000 audit[6459]: USER_ACCT pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:55.880884 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:55.933038 kernel: audit: type=1101 audit(1768598695.864:1049): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:55.934123 systemd-logind[1570]: New session 43 of user core. Jan 16 21:24:55.871000 audit[6459]: CRED_ACQ pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:55.950571 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 16 21:24:56.007907 kernel: audit: type=1103 audit(1768598695.871:1050): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.007994 kernel: audit: type=1006 audit(1768598695.871:1051): pid=6459 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 16 21:24:55.871000 audit[6459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde45f610 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:56.118998 kubelet[2889]: E0116 21:24:56.117218 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:24:56.133600 kernel: audit: type=1300 audit(1768598695.871:1051): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde45f610 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:56.178676 kernel: audit: type=1327 audit(1768598695.871:1051): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:55.871000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:56.007000 audit[6459]: USER_START pid=6459 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.354859 kernel: audit: type=1105 audit(1768598696.007:1052): pid=6459 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.355007 kernel: audit: type=1103 audit(1768598696.019:1053): pid=6463 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.019000 audit[6463]: CRED_ACQ pid=6463 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.937074 sshd[6463]: Connection closed by 10.0.0.1 port 51814 Jan 16 21:24:56.932566 sshd-session[6459]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:56.952000 audit[6459]: USER_END pid=6459 uid=0 auid=500 ses=43 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.963113 systemd[1]: sshd@41-10.0.0.34:22-10.0.0.1:51814.service: Deactivated successfully. Jan 16 21:24:56.965900 systemd-logind[1570]: Session 43 logged out. Waiting for processes to exit. Jan 16 21:24:56.973710 systemd[1]: session-43.scope: Deactivated successfully. Jan 16 21:24:57.020547 systemd-logind[1570]: Removed session 43. Jan 16 21:24:56.952000 audit[6459]: CRED_DISP pid=6459 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:57.128188 kubelet[2889]: E0116 21:24:57.120955 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:24:57.128188 kubelet[2889]: E0116 21:24:57.121590 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:24:57.130586 kernel: audit: type=1106 audit(1768598696.952:1054): pid=6459 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:57.130633 kernel: audit: type=1104 audit(1768598696.952:1055): pid=6459 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:56.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.34:22-10.0.0.1:51814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:59.108447 kubelet[2889]: E0116 21:24:59.107106 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:24:59.113142 kubelet[2889]: E0116 21:24:59.111976 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:25:00.056124 kubelet[2889]: E0116 21:25:00.056072 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:25:01.119620 containerd[1592]: time="2026-01-16T21:25:01.118916297Z" level=info msg="container event discarded" container=938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a type=CONTAINER_CREATED_EVENT Jan 16 21:25:01.866664 containerd[1592]: time="2026-01-16T21:25:01.866595165Z" level=info msg="container event discarded" container=938f0aa2de46295f5bb9c4271a0787f600337b67bf976fefab68c9758a26b40a type=CONTAINER_STARTED_EVENT Jan 16 21:25:02.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@42-10.0.0.34:22-10.0.0.1:51828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:25:02.002129 systemd[1]: Started sshd@42-10.0.0.34:22-10.0.0.1:51828.service - OpenSSH per-connection server daemon (10.0.0.1:51828). Jan 16 21:25:02.073589 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:25:02.073728 kernel: audit: type=1130 audit(1768598702.001:1057): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.34:22-10.0.0.1:51828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:25:02.429601 containerd[1592]: time="2026-01-16T21:25:02.429530793Z" level=info msg="container event discarded" container=7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688 type=CONTAINER_CREATED_EVENT Jan 16 21:25:02.930000 audit[6478]: USER_ACCT pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:25:02.969218 sshd-session[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:25:02.985230 sshd[6478]: Accepted publickey for core from 10.0.0.1 port 51828 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:25:03.045702 systemd-logind[1570]: New session 44 of user core. 
Jan 16 21:25:03.116555 containerd[1592]: time="2026-01-16T21:25:03.114745231Z" level=info msg="container event discarded" container=7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688 type=CONTAINER_STARTED_EVENT Jan 16 21:25:03.157237 kernel: audit: type=1101 audit(1768598702.930:1058): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:25:02.955000 audit[6478]: CRED_ACQ pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:25:03.239977 kernel: audit: type=1103 audit(1768598702.955:1059): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:25:03.240250 kernel: audit: type=1006 audit(1768598702.955:1060): pid=6478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 16 21:25:02.955000 audit[6478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5df6ff70 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:25:03.401645 kernel: audit: type=1300 audit(1768598702.955:1060): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5df6ff70 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:25:02.955000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:25:03.409556 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 16 21:25:03.490528 kernel: audit: type=1327 audit(1768598702.955:1060): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:25:03.497000 audit[6478]: USER_START pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:03.607626 kernel: audit: type=1105 audit(1768598703.497:1061): pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:03.599000 audit[6482]: CRED_ACQ pid=6482 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:03.721142 kernel: audit: type=1103 audit(1768598703.599:1062): pid=6482 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:03.731244 containerd[1592]: time="2026-01-16T21:25:03.730229897Z" level=info msg="container event discarded" container=7996387bf9d54c3938377ff06457b3afb0b655e9eafbb626d1a02bd3dc075688 type=CONTAINER_STOPPED_EVENT
Jan 16 21:25:04.981601 sshd[6482]: Connection closed by 10.0.0.1 port 51828
Jan 16 21:25:04.985000 audit[6478]: USER_END pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:04.983590 sshd-session[6478]: pam_unix(sshd:session): session closed for user core
Jan 16 21:25:05.094647 kubelet[2889]: E0116 21:25:05.073114 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:25:05.109223 kernel: audit: type=1106 audit(1768598704.985:1063): pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:05.109679 kernel: audit: type=1104 audit(1768598704.985:1064): pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:04.985000 audit[6478]: CRED_DISP pid=6478 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:05.117095 systemd[1]: sshd@42-10.0.0.34:22-10.0.0.1:51828.service: Deactivated successfully.
Jan 16 21:25:05.126157 systemd[1]: session-44.scope: Deactivated successfully.
Jan 16 21:25:05.135110 systemd-logind[1570]: Session 44 logged out. Waiting for processes to exit.
Jan 16 21:25:05.141996 systemd-logind[1570]: Removed session 44.
Jan 16 21:25:05.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.34:22-10.0.0.1:51828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:25:06.088226 kubelet[2889]: E0116 21:25:06.087191 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:25:08.156442 kubelet[2889]: E0116 21:25:08.152964 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:25:09.059541 kubelet[2889]: E0116 21:25:09.057204 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:25:10.049239 systemd[1]: Started sshd@43-10.0.0.34:22-10.0.0.1:52952.service - OpenSSH per-connection server daemon (10.0.0.1:52952).
Jan 16 21:25:10.072054 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:25:10.072197 kernel: audit: type=1130 audit(1768598710.049:1066): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.34:22-10.0.0.1:52952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:25:10.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.34:22-10.0.0.1:52952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:25:10.089570 kubelet[2889]: E0116 21:25:10.086700 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:25:10.093995 kubelet[2889]: E0116 21:25:10.093803 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:25:10.706654 sshd[6521]: Accepted publickey for core from 10.0.0.1 port 52952 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A
Jan 16 21:25:10.703000 audit[6521]: USER_ACCT pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:10.714985 sshd-session[6521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:25:10.760007 systemd-logind[1570]: New session 45 of user core.
Jan 16 21:25:10.710000 audit[6521]: CRED_ACQ pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:10.832516 kernel: audit: type=1101 audit(1768598710.703:1067): pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:10.832625 kernel: audit: type=1103 audit(1768598710.710:1068): pid=6521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:10.874773 kernel: audit: type=1006 audit(1768598710.710:1069): pid=6521 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1
Jan 16 21:25:10.710000 audit[6521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaada3fe0 a2=3 a3=0 items=0 ppid=1 pid=6521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:25:10.915089 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 16 21:25:10.949693 kernel: audit: type=1300 audit(1768598710.710:1069): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaada3fe0 a2=3 a3=0 items=0 ppid=1 pid=6521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:25:10.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:25:10.988048 kernel: audit: type=1327 audit(1768598710.710:1069): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:25:10.931000 audit[6521]: USER_START pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:11.067797 kernel: audit: type=1105 audit(1768598710.931:1070): pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:11.071732 kernel: audit: type=1103 audit(1768598710.942:1071): pid=6525 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:10.942000 audit[6525]: CRED_ACQ pid=6525 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:25:11.072025 kubelet[2889]: E0116 21:25:11.069969 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:25:16.115307 containerd[1592]: time="2026-01-16T21:25:16.114717018Z" level=info msg="container event discarded" container=04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179 type=CONTAINER_CREATED_EVENT
Jan 16 21:25:16.922761 containerd[1592]: time="2026-01-16T21:25:16.922691903Z" level=info msg="container event discarded" container=04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179 type=CONTAINER_STARTED_EVENT
Jan 16 21:25:17.057686 kubelet[2889]: E0116 21:25:17.057605 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:25:21.059082 kubelet[2889]: E0116 21:25:21.058798 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:25:21.067493 kubelet[2889]: E0116 21:25:21.067170 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:25:22.063695 kubelet[2889]: E0116 21:25:22.062722 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:25:24.078679 kubelet[2889]: E0116 21:25:24.078493 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:25:24.305466 containerd[1592]: time="2026-01-16T21:25:24.303695858Z" level=info msg="container event discarded" container=04b624b2d7b4c8475296bb2f4bcf15051f284926173db5c8812ae46071a62179 type=CONTAINER_STOPPED_EVENT
Jan 16 21:25:25.057779 kubelet[2889]: E0116 21:25:25.057719 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:25:28.060084 kubelet[2889]: E0116 21:25:28.059675 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:25:28.303003 kubelet[2889]: E0116 21:25:28.302605 2889 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 16 21:25:33.055601 kubelet[2889]: E0116 21:25:33.055226 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:25:33.062695 kubelet[2889]: E0116 21:25:33.062151 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:25:36.068744 kubelet[2889]: E0116 21:25:36.068190 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:25:36.068744 kubelet[2889]: E0116 21:25:36.068691 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:25:36.072186 kubelet[2889]: E0116 21:25:36.069536 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:25:37.070466 kubelet[2889]: E0116 21:25:37.069812 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:25:38.304074 kubelet[2889]: E0116 21:25:38.303127 2889 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 16 21:25:41.062586 kubelet[2889]: E0116 21:25:41.062221 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:25:44.111717 kubelet[2889]: E0116 21:25:44.104067 2889 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{goldmane-7c778bb748-dgkw9.188b52f17c0c9b08 calico-system 2229 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-7c778bb748-dgkw9,UID:91f2e5a0-0976-4b7a-ac63-530715dff408,APIVersion:v1,ResourceVersion:985,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:21:15 +0000 UTC,LastTimestamp:2026-01-16 21:25:10.086639647 +0000 UTC m=+374.644942608,Count:16,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 16 21:25:46.076864 kubelet[2889]: E0116 21:25:46.076825 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:25:46.811667 containerd[1592]: time="2026-01-16T21:25:46.811573038Z" level=error msg="ExecSync for \"4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Jan 16 21:25:46.815047 kubelet[2889]: E0116 21:25:46.814813 2889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Jan 16 21:25:47.066190 kubelet[2889]: E0116 21:25:47.066070 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:25:48.306533 kubelet[2889]: E0116 21:25:48.305664 2889 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 16 21:25:49.060209 kubelet[2889]: E0116 21:25:49.058811 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:25:49.060209 kubelet[2889]: E0116 21:25:49.059177 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:25:51.061641 kubelet[2889]: E0116 21:25:51.060841 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:25:53.069698 kubelet[2889]: E0116 21:25:53.068742 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:25:53.074599 kubelet[2889]: E0116 21:25:53.073691 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:25:55.621097 kubelet[2889]: E0116 21:25:55.620849 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:25:58.308702 kubelet[2889]: E0116 21:25:58.307842 2889 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 16 21:25:59.062246 kubelet[2889]: E0116 21:25:59.060229 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:25:59.078070 kubelet[2889]: E0116 21:25:59.077192 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2"
Jan 16 21:26:00.057578 kubelet[2889]: E0116 21:26:00.056659 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:26:00.057578 kubelet[2889]: E0116 21:26:00.057216 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:26:02.060628 kubelet[2889]: E0116 21:26:02.059783 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:26:03.060770 kubelet[2889]: E0116 21:26:03.060698 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32"
Jan 16 21:26:04.523491 containerd[1592]: time="2026-01-16T21:26:04.522601703Z" level=info msg="container event discarded" container=4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866 type=CONTAINER_CREATED_EVENT
Jan 16 21:26:05.358131 containerd[1592]: time="2026-01-16T21:26:05.356825919Z" level=info msg="container event discarded" container=4e7aa1694340ec2779801715598738b2ddbefc9f931b085401cce2bf040b8866 type=CONTAINER_STARTED_EVENT
Jan 16 21:26:06.074917 kubelet[2889]: E0116 21:26:06.074860 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de"
Jan 16 21:26:06.088233 kubelet[2889]: E0116 21:26:06.087907 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592"
Jan 16 21:26:06.092226 kubelet[2889]: E0116 21:26:06.090664 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db"
Jan 16 21:26:06.645850 systemd[1]: cri-containerd-b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2.scope: Deactivated successfully.
Jan 16 21:26:06.648151 systemd[1]: cri-containerd-b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2.scope: Consumed 43.302s CPU time, 81.4M memory peak, 16.4M read from disk.
Jan 16 21:26:06.653000 audit: BPF prog-id=103 op=UNLOAD
Jan 16 21:26:06.693700 kernel: audit: type=1334 audit(1768598766.653:1072): prog-id=103 op=UNLOAD
Jan 16 21:26:06.693918 containerd[1592]: time="2026-01-16T21:26:06.693878432Z" level=info msg="received container exit event container_id:\"b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2\" id:\"b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2\" pid:2732 exit_status:1 exited_at:{seconds:1768598766 nanos:681809125}"
Jan 16 21:26:06.757237 kernel: audit: type=1334 audit(1768598766.653:1073): prog-id=107 op=UNLOAD
Jan 16 21:26:06.757615 kernel: audit: type=1334 audit(1768598766.654:1074): prog-id=256 op=LOAD
Jan 16 21:26:06.653000 audit: BPF prog-id=107 op=UNLOAD
Jan 16 21:26:06.654000 audit: BPF prog-id=256 op=LOAD
Jan 16 21:26:06.757784 kubelet[2889]: E0116 21:26:06.716906 2889 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jan 16 21:26:06.757784 kubelet[2889]: I0116 21:26:06.717095 2889 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 16 21:26:06.654000 audit: BPF prog-id=88 op=UNLOAD
Jan 16 21:26:06.949692 kernel: audit: type=1334 audit(1768598766.654:1075): prog-id=88 op=UNLOAD
Jan 16 21:26:07.170424 kubelet[2889]: E0116 21:26:07.079696 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:26:07.400825 sshd[6525]: Connection closed by 10.0.0.1 port 52952
Jan 16 21:26:07.401627 sshd-session[6521]: pam_unix(sshd:session): session closed for user core
Jan 16 21:26:07.412000 audit[6521]: USER_END pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:07.434087 systemd[1]: sshd@43-10.0.0.34:22-10.0.0.1:52952.service: Deactivated successfully.
Jan 16 21:26:07.451516 systemd[1]: session-45.scope: Deactivated successfully.
Jan 16 21:26:07.469586 systemd-logind[1570]: Session 45 logged out. Waiting for processes to exit.
Jan 16 21:26:07.476805 systemd-logind[1570]: Removed session 45.
Jan 16 21:26:07.579865 kernel: audit: type=1106 audit(1768598767.412:1076): pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:07.587091 kernel: audit: type=1104 audit(1768598767.412:1077): pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:07.412000 audit[6521]: CRED_DISP pid=6521 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:07.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.34:22-10.0.0.1:52952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:07.979554 kernel: audit: type=1131 audit(1768598767.434:1078): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.34:22-10.0.0.1:52952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:07.980194 systemd[1]: cri-containerd-8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f.scope: Deactivated successfully. Jan 16 21:26:07.995768 systemd[1]: cri-containerd-8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f.scope: Consumed 45.658s CPU time, 108.8M memory peak, 12.8M read from disk. Jan 16 21:26:08.013757 systemd[1]: cri-containerd-46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0.scope: Deactivated successfully. Jan 16 21:26:08.024878 systemd[1]: cri-containerd-46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0.scope: Consumed 16.306s CPU time, 35.1M memory peak, 11M read from disk. 
Jan 16 21:26:08.032000 audit: BPF prog-id=257 op=LOAD Jan 16 21:26:08.105847 kernel: audit: type=1334 audit(1768598768.032:1079): prog-id=257 op=LOAD Jan 16 21:26:08.032000 audit: BPF prog-id=83 op=UNLOAD Jan 16 21:26:08.164840 kernel: audit: type=1334 audit(1768598768.032:1080): prog-id=83 op=UNLOAD Jan 16 21:26:08.165163 kernel: audit: type=1334 audit(1768598768.051:1081): prog-id=146 op=UNLOAD Jan 16 21:26:08.051000 audit: BPF prog-id=146 op=UNLOAD Jan 16 21:26:08.165601 kubelet[2889]: E0116 21:26:08.140757 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:08.051000 audit: BPF prog-id=150 op=UNLOAD Jan 16 21:26:08.067000 audit: BPF prog-id=98 op=UNLOAD Jan 16 21:26:08.067000 audit: BPF prog-id=102 op=UNLOAD Jan 16 21:26:08.282719 containerd[1592]: time="2026-01-16T21:26:08.282233517Z" level=info msg="received container exit event container_id:\"8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f\" id:\"8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f\" pid:3229 exit_status:1 exited_at:{seconds:1768598768 nanos:274232497}" Jan 16 21:26:08.468835 containerd[1592]: time="2026-01-16T21:26:08.436614220Z" level=info msg="received container exit event container_id:\"46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0\" id:\"46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0\" pid:2718 exit_status:1 exited_at:{seconds:1768598768 nanos:433803428}" Jan 16 21:26:08.758738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2-rootfs.mount: Deactivated successfully. Jan 16 21:26:09.500641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f-rootfs.mount: Deactivated successfully. 
Jan 16 21:26:09.619197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0-rootfs.mount: Deactivated successfully. Jan 16 21:26:10.007906 kubelet[2889]: I0116 21:26:10.007864 2889 scope.go:117] "RemoveContainer" containerID="8e0c462898bed64b529b04f278dd2a035170639128a1c8a73460f6aaba227e9f" Jan 16 21:26:10.066725 kubelet[2889]: E0116 21:26:10.063908 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:10.071789 kubelet[2889]: I0116 21:26:10.071129 2889 scope.go:117] "RemoveContainer" containerID="46d9793cd514cdcb451e24bf4f8f6c415a73afbeff686a053da07efe5dda71c0" Jan 16 21:26:10.077919 kubelet[2889]: I0116 21:26:10.075082 2889 scope.go:117] "RemoveContainer" containerID="b6756513308ec6e6c6496d420bff733fcbe83aebf983d8249a7e97a2f184bfc2" Jan 16 21:26:10.077919 kubelet[2889]: E0116 21:26:10.075160 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:10.077919 kubelet[2889]: E0116 21:26:10.076457 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:10.163549 containerd[1592]: time="2026-01-16T21:26:10.163245426Z" level=info msg="CreateContainer within sandbox \"2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 16 21:26:10.181374 containerd[1592]: time="2026-01-16T21:26:10.165870087Z" level=info msg="CreateContainer within sandbox \"8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 16 21:26:10.181374 
containerd[1592]: time="2026-01-16T21:26:10.166108492Z" level=info msg="CreateContainer within sandbox \"af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 16 21:26:10.466166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015755512.mount: Deactivated successfully. Jan 16 21:26:10.511675 containerd[1592]: time="2026-01-16T21:26:10.511634214Z" level=info msg="Container ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:26:10.536600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882811736.mount: Deactivated successfully. Jan 16 21:26:10.565549 containerd[1592]: time="2026-01-16T21:26:10.557182432Z" level=info msg="Container 1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:26:10.622766 containerd[1592]: time="2026-01-16T21:26:10.622707831Z" level=info msg="Container b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:26:10.706148 containerd[1592]: time="2026-01-16T21:26:10.704919912Z" level=info msg="CreateContainer within sandbox \"2fabdf91645dbbc66eaf02214742d15b132773e267f7e46ccc205741a4f6e8b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7\"" Jan 16 21:26:10.707495 containerd[1592]: time="2026-01-16T21:26:10.707197791Z" level=info msg="StartContainer for \"ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7\"" Jan 16 21:26:10.710204 containerd[1592]: time="2026-01-16T21:26:10.710176505Z" level=info msg="connecting to shim ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7" address="unix:///run/containerd/s/dc5ab1dcaf0bf9c371d453b39963cb33ed53d1977d8f13a2d7ecbc9c04a2edca" protocol=ttrpc version=3 Jan 16 21:26:10.785648 
containerd[1592]: time="2026-01-16T21:26:10.751747065Z" level=info msg="CreateContainer within sandbox \"af23b6414fffa27b1d181bc5ab66148a7ad6cec6e057b45661e0dee4f546dc4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c\"" Jan 16 21:26:10.798532 containerd[1592]: time="2026-01-16T21:26:10.798493653Z" level=info msg="StartContainer for \"1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c\"" Jan 16 21:26:10.804781 containerd[1592]: time="2026-01-16T21:26:10.804749971Z" level=info msg="connecting to shim 1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c" address="unix:///run/containerd/s/ad8ca06b5cd9b10e7a2a775f5f9c5ec276ae4b5fc6238c28e34f5c4af9762b8d" protocol=ttrpc version=3 Jan 16 21:26:10.807612 containerd[1592]: time="2026-01-16T21:26:10.806219167Z" level=info msg="CreateContainer within sandbox \"8fcb1fb81dd9d07259aeb8924064bcb7cc3c4e10a7afc68f65a6c19e9f573024\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59\"" Jan 16 21:26:10.812452 containerd[1592]: time="2026-01-16T21:26:10.809828278Z" level=info msg="StartContainer for \"b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59\"" Jan 16 21:26:10.845101 containerd[1592]: time="2026-01-16T21:26:10.839736370Z" level=info msg="connecting to shim b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59" address="unix:///run/containerd/s/56166cc87c596ea5c1d821e73f50abe9d7da7261dbb81d57f7c349a558c938c1" protocol=ttrpc version=3 Jan 16 21:26:10.984667 containerd[1592]: time="2026-01-16T21:26:10.977749405Z" level=info msg="container event discarded" container=8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00 type=CONTAINER_CREATED_EVENT Jan 16 21:26:10.984667 containerd[1592]: time="2026-01-16T21:26:10.977792806Z" level=info msg="container event discarded" 
container=8601ed1ae8b2cdf8e17c48306abed5df6014782276132bd633c98bf997936c00 type=CONTAINER_STARTED_EVENT Jan 16 21:26:11.020781 systemd[1]: Started cri-containerd-ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7.scope - libcontainer container ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7. Jan 16 21:26:11.106634 kubelet[2889]: E0116 21:26:11.100091 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:26:11.246000 audit: BPF prog-id=258 op=LOAD Jan 16 21:26:11.249000 audit: BPF prog-id=259 op=LOAD Jan 16 21:26:11.249000 audit[6653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.249000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=259 op=UNLOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=260 op=LOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=261 op=LOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=261 op=UNLOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=260 op=UNLOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.253000 audit: BPF prog-id=262 op=LOAD Jan 16 21:26:11.253000 audit[6653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2573 pid=6653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561383061613835663735326562646363613738636361623932633139 Jan 16 21:26:11.302728 systemd[1]: Started cri-containerd-b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59.scope - libcontainer container b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59. Jan 16 21:26:11.425733 systemd[1]: Started cri-containerd-1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c.scope - libcontainer container 1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c. Jan 16 21:26:11.547000 audit: BPF prog-id=263 op=LOAD Jan 16 21:26:11.551000 audit: BPF prog-id=264 op=LOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=264 op=UNLOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=265 op=LOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=266 op=LOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=266 op=UNLOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=265 op=UNLOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.551000 audit: BPF prog-id=267 op=LOAD Jan 16 21:26:11.551000 audit[6661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2571 pid=6661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234643362326138323862346465303132626131346264666138303833 Jan 16 21:26:11.914000 audit: BPF prog-id=268 op=LOAD Jan 16 21:26:11.989859 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 16 21:26:11.990172 kernel: audit: type=1334 audit(1768598771.914:1101): prog-id=268 op=LOAD Jan 16 21:26:11.914000 audit: BPF prog-id=269 op=LOAD Jan 16 21:26:12.023915 containerd[1592]: 
time="2026-01-16T21:26:12.023496716Z" level=info msg="StartContainer for \"ea80aa85f752ebdcca78ccab92c19f59fad037a6707ebc044fe9c951a63a04f7\" returns successfully" Jan 16 21:26:11.914000 audit[6660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:12.156589 kernel: audit: type=1334 audit(1768598771.914:1102): prog-id=269 op=LOAD Jan 16 21:26:12.156734 kernel: audit: type=1300 audit(1768598771.914:1102): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:12.254662 kernel: audit: type=1327 audit(1768598771.914:1102): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:12.291787 kernel: audit: type=1334 audit(1768598771.914:1103): prog-id=269 op=UNLOAD Jan 16 21:26:11.914000 audit: BPF prog-id=269 op=UNLOAD Jan 16 21:26:12.416200 kernel: audit: type=1300 audit(1768598771.914:1103): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:12.416694 kernel: audit: type=1327 audit(1768598771.914:1103): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:11.914000 audit[6660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:12.420737 containerd[1592]: time="2026-01-16T21:26:12.420702494Z" level=info msg="StartContainer for \"b4d3b2a828b4de012ba14bdfa8083f5a7044f1cdc2b5ae831a5bd545c53bfd59\" returns successfully" Jan 16 21:26:11.951000 audit: BPF prog-id=270 op=LOAD Jan 16 21:26:12.522511 kernel: audit: type=1334 audit(1768598771.951:1104): prog-id=270 op=LOAD Jan 16 21:26:12.655700 kernel: audit: type=1300 audit(1768598771.951:1104): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.951000 audit[6660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:12.656616 kubelet[2889]: E0116 21:26:12.637859 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:12.756095 kubelet[2889]: E0116 21:26:12.730550 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:12.761680 kernel: audit: type=1327 audit(1768598771.951:1104): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:11.956000 audit: BPF prog-id=271 op=LOAD Jan 16 21:26:11.956000 audit[6660]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:11.956000 audit: BPF prog-id=271 op=UNLOAD Jan 16 21:26:11.956000 audit[6660]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:11.956000 audit: BPF prog-id=270 op=UNLOAD Jan 16 21:26:11.956000 audit[6660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:11.956000 audit: BPF prog-id=272 op=LOAD Jan 16 21:26:11.956000 audit[6660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2979 pid=6660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323232333831363932666638646234323133663765626464633834 Jan 16 21:26:12.800000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.34:22-10.0.0.1:36500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:12.801113 systemd[1]: Started sshd@44-10.0.0.34:22-10.0.0.1:36500.service - OpenSSH per-connection server daemon (10.0.0.1:36500). Jan 16 21:26:12.899881 containerd[1592]: time="2026-01-16T21:26:12.894824926Z" level=info msg="StartContainer for \"1b222381692ff8db4213f7ebddc84c10f5b61d3eb1a359a6c02f696320401b7c\" returns successfully" Jan 16 21:26:13.546686 sshd[6740]: Accepted publickey for core from 10.0.0.1 port 36500 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:13.544000 audit[6740]: USER_ACCT pid=6740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:13.555000 audit[6740]: CRED_ACQ pid=6740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:13.555000 audit[6740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7476f440 a2=3 a3=0 items=0 ppid=1 pid=6740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:13.555000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:13.564248 sshd-session[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:13.693857 systemd-logind[1570]: New session 46 of user core. Jan 16 21:26:13.731673 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 16 21:26:13.820000 audit[6740]: USER_START pid=6740 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:13.898000 audit[6749]: CRED_ACQ pid=6749 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:13.903889 kubelet[2889]: E0116 21:26:13.903463 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:14.180175 kubelet[2889]: E0116 21:26:14.144925 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:26:15.001202 kubelet[2889]: E0116 21:26:15.000833 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:15.225870 sshd[6749]: Connection closed by 10.0.0.1 port 36500 Jan 16 21:26:15.258185 sshd-session[6740]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:15.278837 containerd[1592]: time="2026-01-16T21:26:15.258654905Z" level=info msg="container event discarded" 
container=7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b type=CONTAINER_CREATED_EVENT Jan 16 21:26:15.278837 containerd[1592]: time="2026-01-16T21:26:15.258707984Z" level=info msg="container event discarded" container=7a1a32d00c71771b98b01571566871cff2a44b4d8e4d2c40e1c7a0ae47d3e59b type=CONTAINER_STARTED_EVENT Jan 16 21:26:15.294000 audit[6740]: USER_END pid=6740 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:15.294000 audit[6740]: CRED_DISP pid=6740 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:15.305603 systemd[1]: sshd@44-10.0.0.34:22-10.0.0.1:36500.service: Deactivated successfully. Jan 16 21:26:15.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.34:22-10.0.0.1:36500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:15.318910 systemd[1]: session-46.scope: Deactivated successfully. Jan 16 21:26:15.338777 systemd-logind[1570]: Session 46 logged out. Waiting for processes to exit. Jan 16 21:26:15.354760 systemd-logind[1570]: Removed session 46. 
Jan 16 21:26:15.847651 containerd[1592]: time="2026-01-16T21:26:15.847223032Z" level=info msg="container event discarded" container=6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298 type=CONTAINER_CREATED_EVENT Jan 16 21:26:15.847651 containerd[1592]: time="2026-01-16T21:26:15.847611396Z" level=info msg="container event discarded" container=6f53f709ad43efb3ea070336098785794fc15eed36f4e0d0d1f26a770a337298 type=CONTAINER_STARTED_EVENT Jan 16 21:26:16.128618 kubelet[2889]: E0116 21:26:16.127128 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:17.133579 kubelet[2889]: E0116 21:26:17.132792 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:26:17.135763 kubelet[2889]: E0116 21:26:17.134547 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:26:17.488822 containerd[1592]: time="2026-01-16T21:26:17.488751001Z" level=info msg="container 
event discarded" container=5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf type=CONTAINER_CREATED_EVENT Jan 16 21:26:17.494862 containerd[1592]: time="2026-01-16T21:26:17.492557118Z" level=info msg="container event discarded" container=5abe4b5e9016fb7bcc1b7f8790cbdb97ba2116c3e3b675cc2f39915a3a842acf type=CONTAINER_STARTED_EVENT Jan 16 21:26:17.843533 containerd[1592]: time="2026-01-16T21:26:17.841728688Z" level=info msg="container event discarded" container=aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017 type=CONTAINER_CREATED_EVENT Jan 16 21:26:18.100855 kubelet[2889]: E0116 21:26:18.096208 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:26:18.430877 containerd[1592]: time="2026-01-16T21:26:18.430440730Z" level=info msg="container event discarded" container=cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f type=CONTAINER_CREATED_EVENT Jan 16 21:26:18.430877 containerd[1592]: time="2026-01-16T21:26:18.430491965Z" level=info msg="container event discarded" container=cfe9d3a3878f3419059aebad06e0edfcdc1dcd3c9d06552472bf0a613e70545f type=CONTAINER_STARTED_EVENT Jan 16 21:26:18.551657 
containerd[1592]: time="2026-01-16T21:26:18.551588250Z" level=info msg="container event discarded" container=aae5a0f0a772ed371ab151517cb931fa96e10a29b270d5c50f504f65afa5b017 type=CONTAINER_STARTED_EVENT Jan 16 21:26:18.828758 containerd[1592]: time="2026-01-16T21:26:18.826189370Z" level=info msg="container event discarded" container=e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848 type=CONTAINER_CREATED_EVENT Jan 16 21:26:18.828758 containerd[1592]: time="2026-01-16T21:26:18.826722724Z" level=info msg="container event discarded" container=e2bf04b540c203bb7bccb7bf3ab624930dff0c170756317431c892cc1c735848 type=CONTAINER_STARTED_EVENT Jan 16 21:26:19.297706 containerd[1592]: time="2026-01-16T21:26:19.297498663Z" level=info msg="container event discarded" container=c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb type=CONTAINER_CREATED_EVENT Jan 16 21:26:20.199788 containerd[1592]: time="2026-01-16T21:26:20.199697806Z" level=info msg="container event discarded" container=fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34 type=CONTAINER_CREATED_EVENT Jan 16 21:26:20.199788 containerd[1592]: time="2026-01-16T21:26:20.199756786Z" level=info msg="container event discarded" container=fa7c474d0db17b29d43ec5526d244107d62bf600c4e179b8814d2cbacd182e34 type=CONTAINER_STARTED_EVENT Jan 16 21:26:20.251477 containerd[1592]: time="2026-01-16T21:26:20.250578094Z" level=info msg="container event discarded" container=c77fb3c8ed45ed1d1409152c81e193d63a18d160a450d09fa041692badc667fb type=CONTAINER_STARTED_EVENT Jan 16 21:26:20.362693 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 21:26:20.362828 kernel: audit: type=1130 audit(1768598780.314:1118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.34:22-10.0.0.1:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:26:20.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.34:22-10.0.0.1:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:20.317732 systemd[1]: Started sshd@45-10.0.0.34:22-10.0.0.1:36502.service - OpenSSH per-connection server daemon (10.0.0.1:36502). Jan 16 21:26:20.928565 sshd[6776]: Accepted publickey for core from 10.0.0.1 port 36502 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:20.935932 sshd-session[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:20.925000 audit[6776]: USER_ACCT pid=6776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:20.982929 systemd-logind[1570]: New session 47 of user core. 
Jan 16 21:26:20.930000 audit[6776]: CRED_ACQ pid=6776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.127937 kubelet[2889]: E0116 21:26:21.103910 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:26:21.215599 kernel: audit: type=1101 audit(1768598780.925:1119): pid=6776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.215712 kernel: audit: type=1103 audit(1768598780.930:1120): pid=6776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.218148 kernel: audit: type=1006 audit(1768598780.930:1121): pid=6776 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 16 21:26:21.296674 containerd[1592]: time="2026-01-16T21:26:21.296593432Z" level=info msg="container event discarded" container=feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083 type=CONTAINER_CREATED_EVENT Jan 16 21:26:21.297770 containerd[1592]: 
time="2026-01-16T21:26:21.297723229Z" level=info msg="container event discarded" container=feab9dc0a489c601c14c4aa1a3d7837f546ddd5d189163bd26b972f63a672083 type=CONTAINER_STARTED_EVENT Jan 16 21:26:20.930000 audit[6776]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf590530 a2=3 a3=0 items=0 ppid=1 pid=6776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:21.434553 kernel: audit: type=1300 audit(1768598780.930:1121): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbf590530 a2=3 a3=0 items=0 ppid=1 pid=6776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:20.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:21.443805 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 16 21:26:21.483438 kernel: audit: type=1327 audit(1768598780.930:1121): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:21.716479 kernel: audit: type=1105 audit(1768598781.558:1122): pid=6776 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.558000 audit[6776]: USER_START pid=6776 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.585000 audit[6781]: CRED_ACQ pid=6781 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:21.815242 kernel: audit: type=1103 audit(1768598781.585:1123): pid=6781 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:22.280133 kubelet[2889]: E0116 21:26:22.278242 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:23.348193 sshd[6781]: Connection closed by 10.0.0.1 port 36502 Jan 16 21:26:23.353728 sshd-session[6776]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:23.509165 kernel: audit: type=1106 audit(1768598783.419:1124): pid=6776 uid=0 auid=500 ses=47 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:23.419000 audit[6776]: USER_END pid=6776 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:23.501649 systemd[1]: sshd@45-10.0.0.34:22-10.0.0.1:36502.service: Deactivated successfully. Jan 16 21:26:23.513556 systemd[1]: session-47.scope: Deactivated successfully. Jan 16 21:26:23.606816 kernel: audit: type=1104 audit(1768598783.421:1125): pid=6776 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:23.421000 audit[6776]: CRED_DISP pid=6776 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:23.582224 systemd-logind[1570]: Session 47 logged out. Waiting for processes to exit. Jan 16 21:26:23.612762 systemd-logind[1570]: Removed session 47. Jan 16 21:26:23.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.34:22-10.0.0.1:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:26:24.376839 kubelet[2889]: E0116 21:26:24.376743 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:26:25.104744 kubelet[2889]: E0116 21:26:25.099168 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:26:26.128230 kubelet[2889]: E0116 21:26:26.109972 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:26.398629 kubelet[2889]: E0116 21:26:26.398223 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 16 21:26:28.101904 kubelet[2889]: E0116 21:26:28.101781 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:26:28.425118 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:26:28.425386 kernel: audit: type=1130 audit(1768598788.407:1127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.34:22-10.0.0.1:49780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:28.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.34:22-10.0.0.1:49780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:28.408963 systemd[1]: Started sshd@46-10.0.0.34:22-10.0.0.1:49780.service - OpenSSH per-connection server daemon (10.0.0.1:49780). 
Jan 16 21:26:28.657000 audit[6797]: USER_ACCT pid=6797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.658888 sshd[6797]: Accepted publickey for core from 10.0.0.1 port 49780 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:28.694961 sshd-session[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:28.727424 kernel: audit: type=1101 audit(1768598788.657:1128): pid=6797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.727538 kernel: audit: type=1103 audit(1768598788.673:1129): pid=6797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.673000 audit[6797]: CRED_ACQ pid=6797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.732382 systemd-logind[1570]: New session 48 of user core. 
Jan 16 21:26:28.673000 audit[6797]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed767fb30 a2=3 a3=0 items=0 ppid=1 pid=6797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:28.827567 kernel: audit: type=1006 audit(1768598788.673:1130): pid=6797 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 16 21:26:28.827740 kernel: audit: type=1300 audit(1768598788.673:1130): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed767fb30 a2=3 a3=0 items=0 ppid=1 pid=6797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:28.852124 kernel: audit: type=1327 audit(1768598788.673:1130): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:28.673000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:28.895160 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 16 21:26:28.917000 audit[6797]: USER_START pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.947178 kernel: audit: type=1105 audit(1768598788.917:1131): pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.947497 kernel: audit: type=1103 audit(1768598788.928:1132): pid=6801 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:28.928000 audit[6801]: CRED_ACQ pid=6801 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:29.594387 sshd[6801]: Connection closed by 10.0.0.1 port 49780 Jan 16 21:26:29.600811 sshd-session[6797]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:29.606000 audit[6797]: USER_END pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:29.631856 systemd-logind[1570]: Session 48 logged out. Waiting for processes to exit. 
Jan 16 21:26:29.636040 systemd[1]: sshd@46-10.0.0.34:22-10.0.0.1:49780.service: Deactivated successfully. Jan 16 21:26:29.645799 systemd[1]: session-48.scope: Deactivated successfully. Jan 16 21:26:29.647469 kernel: audit: type=1106 audit(1768598789.606:1133): pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:29.613000 audit[6797]: CRED_DISP pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:29.653402 systemd-logind[1570]: Removed session 48. Jan 16 21:26:29.692393 kernel: audit: type=1104 audit(1768598789.613:1134): pid=6797 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:29.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.34:22-10.0.0.1:49780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:26:31.058125 kubelet[2889]: E0116 21:26:31.057863 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:26:32.081869 containerd[1592]: time="2026-01-16T21:26:32.079367407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:26:32.109207 kubelet[2889]: E0116 21:26:32.108946 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:32.202068 containerd[1592]: time="2026-01-16T21:26:32.201725639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:26:32.213616 containerd[1592]: time="2026-01-16T21:26:32.206781887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:26:32.213616 containerd[1592]: time="2026-01-16T21:26:32.207770441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:26:32.213908 kubelet[2889]: E0116 21:26:32.211861 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:26:32.213908 kubelet[2889]: E0116 21:26:32.211927 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:26:32.213908 kubelet[2889]: E0116 21:26:32.212761 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:26:32.216346 kubelet[2889]: E0116 21:26:32.216145 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:26:33.086391 kubelet[2889]: E0116 21:26:33.063217 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:26:34.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.34:22-10.0.0.1:59688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:34.626647 systemd[1]: Started sshd@47-10.0.0.34:22-10.0.0.1:59688.service - OpenSSH per-connection server daemon (10.0.0.1:59688). Jan 16 21:26:34.636712 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:26:34.636789 kernel: audit: type=1130 audit(1768598794.624:1136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.34:22-10.0.0.1:59688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:34.832805 sshd[6823]: Accepted publickey for core from 10.0.0.1 port 59688 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:34.827000 audit[6823]: USER_ACCT pid=6823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:34.840636 sshd-session[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:34.852844 systemd-logind[1570]: New session 49 of user core. 
Jan 16 21:26:34.864980 kernel: audit: type=1101 audit(1768598794.827:1137): pid=6823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:34.835000 audit[6823]: CRED_ACQ pid=6823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:34.928319 kernel: audit: type=1103 audit(1768598794.835:1138): pid=6823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:34.928438 kernel: audit: type=1006 audit(1768598794.835:1139): pid=6823 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 16 21:26:34.928496 kernel: audit: type=1300 audit(1768598794.835:1139): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe47d5c950 a2=3 a3=0 items=0 ppid=1 pid=6823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:34.835000 audit[6823]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe47d5c950 a2=3 a3=0 items=0 ppid=1 pid=6823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:34.835000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:34.957540 kernel: audit: type=1327 audit(1768598794.835:1139): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:34.957800 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 16 21:26:34.971000 audit[6823]: USER_START pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.022919 kernel: audit: type=1105 audit(1768598794.971:1140): pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.023096 kernel: audit: type=1103 audit(1768598794.995:1141): pid=6827 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:34.995000 audit[6827]: CRED_ACQ pid=6827 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.102451 kubelet[2889]: E0116 21:26:35.101365 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:26:35.305676 sshd[6827]: Connection closed by 10.0.0.1 port 59688 Jan 16 21:26:35.306554 sshd-session[6823]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:35.322000 audit[6823]: USER_END pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.337963 systemd[1]: sshd@47-10.0.0.34:22-10.0.0.1:59688.service: Deactivated successfully. Jan 16 21:26:35.345838 systemd[1]: session-49.scope: Deactivated successfully. Jan 16 21:26:35.360891 systemd-logind[1570]: Session 49 logged out. Waiting for processes to exit. Jan 16 21:26:35.396499 systemd-logind[1570]: Removed session 49. 
Jan 16 21:26:35.425386 kernel: audit: type=1106 audit(1768598795.322:1142): pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.322000 audit[6823]: CRED_DISP pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:35.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.34:22-10.0.0.1:59688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:35.452559 kernel: audit: type=1104 audit(1768598795.322:1143): pid=6823 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:38.082693 kubelet[2889]: E0116 21:26:38.080669 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408" Jan 16 21:26:40.392174 systemd[1]: Started sshd@48-10.0.0.34:22-10.0.0.1:59694.service - OpenSSH per-connection server daemon (10.0.0.1:59694). 
Jan 16 21:26:40.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.34:22-10.0.0.1:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:40.422379 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:26:40.427691 kernel: audit: type=1130 audit(1768598800.395:1145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.34:22-10.0.0.1:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:40.570000 audit[6868]: USER_ACCT pid=6868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.576921 sshd[6868]: Accepted publickey for core from 10.0.0.1 port 59694 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:40.575556 sshd-session[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:40.570000 audit[6868]: CRED_ACQ pid=6868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.592497 systemd-logind[1570]: New session 50 of user core. 
Jan 16 21:26:40.606150 kernel: audit: type=1101 audit(1768598800.570:1146): pid=6868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.606399 kernel: audit: type=1103 audit(1768598800.570:1147): pid=6868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.606457 kernel: audit: type=1006 audit(1768598800.570:1148): pid=6868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 16 21:26:40.621085 kernel: audit: type=1300 audit(1768598800.570:1148): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc871862e0 a2=3 a3=0 items=0 ppid=1 pid=6868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:40.570000 audit[6868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc871862e0 a2=3 a3=0 items=0 ppid=1 pid=6868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:40.657447 kernel: audit: type=1327 audit(1768598800.570:1148): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:40.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:40.671399 systemd[1]: Started session-50.scope - Session 50 of User core. 
Jan 16 21:26:40.686000 audit[6868]: USER_START pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.696000 audit[6872]: CRED_ACQ pid=6872 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.738515 kernel: audit: type=1105 audit(1768598800.686:1149): pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:40.738655 kernel: audit: type=1103 audit(1768598800.696:1150): pid=6872 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:41.037127 sshd[6872]: Connection closed by 10.0.0.1 port 59694 Jan 16 21:26:41.037564 sshd-session[6868]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:41.061000 audit[6868]: USER_END pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:41.074810 systemd[1]: sshd@48-10.0.0.34:22-10.0.0.1:59694.service: Deactivated successfully. 
Jan 16 21:26:41.085437 systemd[1]: session-50.scope: Deactivated successfully. Jan 16 21:26:41.087650 systemd-logind[1570]: Session 50 logged out. Waiting for processes to exit. Jan 16 21:26:41.092588 systemd-logind[1570]: Removed session 50. Jan 16 21:26:41.062000 audit[6868]: CRED_DISP pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:41.128181 kernel: audit: type=1106 audit(1768598801.061:1151): pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:41.128632 kernel: audit: type=1104 audit(1768598801.062:1152): pid=6868 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:41.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.34:22-10.0.0.1:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:26:42.089378 kubelet[2889]: E0116 21:26:42.086169 2889 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:26:43.080636 kubelet[2889]: E0116 21:26:43.080415 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c7c579b5f-dsr85" podUID="f80ed623-af1a-45e7-a125-0c7c2229f592" Jan 16 21:26:43.699000 audit[6892]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=6892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:26:43.699000 audit[6892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcad17f6a0 a2=0 a3=7ffcad17f68c items=0 ppid=3054 pid=6892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:43.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:26:43.736000 audit[6892]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=6892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:26:43.736000 audit[6892]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcad17f6a0 a2=0 a3=7ffcad17f68c items=0 ppid=3054 pid=6892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:43.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:26:44.099493 kubelet[2889]: E0116 21:26:44.084428 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-cb6w2" podUID="7e12a227-9190-436c-a55f-74274779eb32" Jan 16 21:26:45.179850 kubelet[2889]: E0116 21:26:45.178660 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5984987-nz6bj" podUID="ac06912f-e290-4031-a848-1392298fa9de" Jan 16 21:26:46.123433 systemd[1]: Started sshd@49-10.0.0.34:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336). Jan 16 21:26:46.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.34:22-10.0.0.1:39336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:26:46.129731 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 21:26:46.134739 kernel: audit: type=1130 audit(1768598806.121:1156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.34:22-10.0.0.1:39336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:46.655000 audit[6894]: USER_ACCT pid=6894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.700649 kernel: audit: type=1101 audit(1768598806.655:1157): pid=6894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.700886 sshd[6894]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A Jan 16 21:26:46.701000 audit[6894]: CRED_ACQ pid=6894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.732438 kernel: audit: type=1103 audit(1768598806.701:1158): pid=6894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.712173 sshd-session[6894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:26:46.747333 kernel: audit: type=1006 audit(1768598806.701:1159): pid=6894 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 16 21:26:46.701000 audit[6894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0da7bff0 a2=3 a3=0 items=0 ppid=1 pid=6894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:46.780985 kernel: audit: type=1300 audit(1768598806.701:1159): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0da7bff0 a2=3 a3=0 items=0 ppid=1 pid=6894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:26:46.701000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:46.784680 systemd-logind[1570]: New session 51 of user core. Jan 16 21:26:46.791494 kernel: audit: type=1327 audit(1768598806.701:1159): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:26:46.814796 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 16 21:26:46.828000 audit[6894]: USER_START pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.883633 kernel: audit: type=1105 audit(1768598806.828:1160): pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.837000 audit[6899]: CRED_ACQ pid=6899 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:46.916749 kernel: audit: type=1103 audit(1768598806.837:1161): pid=6899 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:47.201747 containerd[1592]: time="2026-01-16T21:26:47.201570272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:26:47.290752 sshd[6899]: Connection closed by 10.0.0.1 port 39336 Jan 16 21:26:47.291143 sshd-session[6894]: pam_unix(sshd:session): session closed for user core Jan 16 21:26:47.295000 audit[6894]: USER_END pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:47.316022 systemd[1]: sshd@49-10.0.0.34:22-10.0.0.1:39336.service: Deactivated successfully. Jan 16 21:26:47.332199 kernel: audit: type=1106 audit(1768598807.295:1162): pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:47.338421 kernel: audit: type=1104 audit(1768598807.298:1163): pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:47.298000 audit[6894]: CRED_DISP pid=6894 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:26:47.338744 systemd[1]: session-51.scope: Deactivated successfully. Jan 16 21:26:47.347892 systemd-logind[1570]: Session 51 logged out. Waiting for processes to exit. Jan 16 21:26:47.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.34:22-10.0.0.1:39336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:26:47.357696 systemd-logind[1570]: Removed session 51. 
Jan 16 21:26:47.394587 containerd[1592]: time="2026-01-16T21:26:47.394405472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:26:47.407728 containerd[1592]: time="2026-01-16T21:26:47.407482993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:26:47.407728 containerd[1592]: time="2026-01-16T21:26:47.407681694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:26:47.415104 kubelet[2889]: E0116 21:26:47.414595 2889 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:26:47.420936 kubelet[2889]: E0116 21:26:47.415859 2889 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:26:47.420936 kubelet[2889]: E0116 21:26:47.419456 2889 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77c5fd8d7d-qxwpz_calico-system(a053e2c3-3297-4e1f-bf5a-da2b545cc5db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:26:47.420936 kubelet[2889]: E0116 
21:26:47.419586 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77c5fd8d7d-qxwpz" podUID="a053e2c3-3297-4e1f-bf5a-da2b545cc5db" Jan 16 21:26:48.094417 kubelet[2889]: E0116 21:26:48.093964 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6hngd" podUID="b99000d7-a136-4299-82d0-76fa7e3c28f2" Jan 16 21:26:51.088802 kubelet[2889]: E0116 21:26:51.082666 2889 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgkw9" podUID="91f2e5a0-0976-4b7a-ac63-530715dff408"
Jan 16 21:26:52.315042 systemd[1]: Started sshd@50-10.0.0.34:22-10.0.0.1:40896.service - OpenSSH per-connection server daemon (10.0.0.1:40896).
Jan 16 21:26:52.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.34:22-10.0.0.1:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:26:52.330426 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 16 21:26:52.330615 kernel: audit: type=1130 audit(1768598812.315:1165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.34:22-10.0.0.1:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:26:52.508000 audit[6913]: USER_ACCT pid=6913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.524538 sshd-session[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 21:26:52.525940 kernel: audit: type=1101 audit(1768598812.508:1166): pid=6913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.525999 sshd[6913]: Accepted publickey for core from 10.0.0.1 port 40896 ssh2: RSA SHA256:CYpGSfsv9hWitfo2WoTrgGaVBd419Q4JX2uYCYupJ8A
Jan 16 21:26:52.520000 audit[6913]: CRED_ACQ pid=6913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.545988 systemd-logind[1570]: New session 52 of user core.
Jan 16 21:26:52.554163 kernel: audit: type=1103 audit(1768598812.520:1167): pid=6913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.554319 kernel: audit: type=1006 audit(1768598812.522:1168): pid=6913 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1
Jan 16 21:26:52.580936 kernel: audit: type=1300 audit(1768598812.522:1168): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe91100ac0 a2=3 a3=0 items=0 ppid=1 pid=6913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:26:52.522000 audit[6913]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe91100ac0 a2=3 a3=0 items=0 ppid=1 pid=6913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:26:52.522000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:26:52.587436 kernel: audit: type=1327 audit(1768598812.522:1168): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 16 21:26:52.587938 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 16 21:26:52.607000 audit[6913]: USER_START pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.634495 kernel: audit: type=1105 audit(1768598812.607:1169): pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.618000 audit[6917]: CRED_ACQ pid=6917 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.650474 kernel: audit: type=1103 audit(1768598812.618:1170): pid=6917 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.846658 sshd[6917]: Connection closed by 10.0.0.1 port 40896
Jan 16 21:26:52.848728 sshd-session[6913]: pam_unix(sshd:session): session closed for user core
Jan 16 21:26:52.853000 audit[6913]: USER_END pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.863867 systemd[1]: sshd@50-10.0.0.34:22-10.0.0.1:40896.service: Deactivated successfully.
Jan 16 21:26:52.874335 kernel: audit: type=1106 audit(1768598812.853:1171): pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.873491 systemd[1]: session-52.scope: Deactivated successfully.
Jan 16 21:26:52.853000 audit[6913]: CRED_DISP pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.878663 systemd-logind[1570]: Session 52 logged out. Waiting for processes to exit.
Jan 16 21:26:52.881002 systemd-logind[1570]: Removed session 52.
Jan 16 21:26:52.892481 kernel: audit: type=1104 audit(1768598812.853:1172): pid=6913 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 16 21:26:52.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.34:22-10.0.0.1:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'